00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3940 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3534 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.060 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.161 Using shallow fetch with depth 1 00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.161 > git --version # timeout=10 00:00:00.229 > git --version # 'git version 2.39.2' 00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.193 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.205 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.214 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.214 > git config core.sparsecheckout # timeout=10 00:00:05.225 > git read-tree -mu HEAD # timeout=10 00:00:05.240 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.258 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.258 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:05.357 [Pipeline] Start of Pipeline 00:00:05.370 [Pipeline] library 00:00:05.371 Loading library shm_lib@master 00:00:05.371 Library shm_lib@master is cached. Copying from home. 00:00:05.386 [Pipeline] node 00:00:05.395 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.396 [Pipeline] { 00:00:05.407 [Pipeline] catchError 00:00:05.409 [Pipeline] { 00:00:05.422 [Pipeline] wrap 00:00:05.431 [Pipeline] { 00:00:05.439 [Pipeline] stage 00:00:05.441 [Pipeline] { (Prologue) 00:00:05.619 [Pipeline] sh 00:00:05.915 + logger -p user.info -t JENKINS-CI 00:00:05.934 [Pipeline] echo 00:00:05.936 Node: CYP12 00:00:05.944 [Pipeline] sh 00:00:06.252 [Pipeline] setCustomBuildProperty 00:00:06.264 [Pipeline] echo 00:00:06.266 Cleanup processes 00:00:06.271 [Pipeline] sh 00:00:06.563 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.563 1349807 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.577 [Pipeline] sh 00:00:06.869 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.869 ++ grep -v 'sudo pgrep' 00:00:06.869 ++ awk '{print $1}' 00:00:06.869 + sudo kill -9 00:00:06.869 + true 00:00:06.885 [Pipeline] cleanWs 00:00:06.896 [WS-CLEANUP] Deleting project workspace... 00:00:06.896 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.904 [WS-CLEANUP] done 00:00:06.908 [Pipeline] setCustomBuildProperty 00:00:06.920 [Pipeline] sh 00:00:07.208 + sudo git config --global --replace-all safe.directory '*' 00:00:07.302 [Pipeline] httpRequest 00:00:08.074 [Pipeline] echo 00:00:08.075 Sorcerer 10.211.164.101 is alive 00:00:08.084 [Pipeline] retry 00:00:08.086 [Pipeline] { 00:00:08.100 [Pipeline] httpRequest 00:00:08.104 HttpMethod: GET 00:00:08.104 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.105 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.119 Response Code: HTTP/1.1 200 OK 00:00:08.119 Success: Status code 200 is in the accepted range: 200,404 00:00:08.120 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:10.740 [Pipeline] } 00:00:10.755 [Pipeline] // retry 00:00:10.762 [Pipeline] sh 00:00:11.051 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.068 [Pipeline] httpRequest 00:00:11.498 [Pipeline] echo 00:00:11.500 Sorcerer 10.211.164.101 is alive 00:00:11.511 [Pipeline] retry 00:00:11.513 [Pipeline] { 00:00:11.526 [Pipeline] httpRequest 00:00:11.531 HttpMethod: GET 00:00:11.532 URL: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:11.532 Sending request to url: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:11.550 Response Code: HTTP/1.1 200 OK 00:00:11.551 Success: Status code 200 is in the accepted range: 200,404 00:00:11.551 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:56.523 [Pipeline] } 00:01:56.540 [Pipeline] // retry 00:01:56.548 [Pipeline] sh 00:01:56.839 + tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:02:00.159 [Pipeline] sh 00:02:00.450 + git -C spdk log --oneline -n5 00:02:00.450 bbce7a874 event: move struct spdk_lw_thread to internal header 00:02:00.450 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:02:00.450 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:02:00.450 0ce363beb spdk_log: introduce spdk_log_ext API 00:02:00.450 412fced1b bdev/compress: unmap support. 
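
The two httpRequest/tar sequences above pull pre-packaged tarballs of the jbp and spdk trees from an internal package cache ("Sorcerer", 10.211.164.101) keyed by commit SHA, inside a [Pipeline] retry wrapper, rather than cloning each repository over the network. Below is a minimal shell sketch of that fetch-and-extract pattern; the `fetch_pkg` helper, the use of curl, and the single-retry policy are illustrative assumptions, not the pipeline's actual implementation (the real step is a Jenkins `httpRequest`):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the cache-fetch pattern seen above: download a
# commit-keyed tarball from the internal package mirror, retry once on
# failure, then unpack it into the workspace.
set -euo pipefail

CACHE=http://10.211.164.101/packages   # package cache ("Sorcerer") from the log
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

fetch_pkg() {                          # fetch_pkg <name> <commit-sha>
    local name=$1 sha=$2
    local out="$WORKSPACE/${name}_${sha}.tar.gz"
    local url="$CACHE/${name}_${sha}.tar.gz"
    # one retry, standing in for the [Pipeline] retry wrapper in the log
    curl -fSs -o "$out" "$url" || { sleep 5; curl -fSs -o "$out" "$url"; }
    tar --no-same-owner -xf "$out" -C "$WORKSPACE"
}

fetch_pkg jbp  bc56972291bf21b4d2a602b495a165146a8d67a1
fetch_pkg spdk bbce7a87401bc737804431cd08d24fede99b1400
```

`tar --no-same-owner` matches the extraction step in the log; it skips ownership restoration so the unpack works cleanly for the non-root Jenkins user.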
00:02:00.470 [Pipeline] withCredentials 00:02:00.482 > git --version # timeout=10 00:02:00.496 > git --version # 'git version 2.39.2' 00:02:00.525 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:00.527 [Pipeline] { 00:02:00.536 [Pipeline] retry 00:02:00.538 [Pipeline] { 00:02:00.553 [Pipeline] sh 00:02:01.153 + git ls-remote http://dpdk.org/git/dpdk main 00:02:01.429 [Pipeline] } 00:02:01.447 [Pipeline] // retry 00:02:01.452 [Pipeline] } 00:02:01.470 [Pipeline] // withCredentials 00:02:01.479 [Pipeline] httpRequest 00:02:02.168 [Pipeline] echo 00:02:02.170 Sorcerer 10.211.164.101 is alive 00:02:02.180 [Pipeline] retry 00:02:02.182 [Pipeline] { 00:02:02.196 [Pipeline] httpRequest 00:02:02.201 HttpMethod: GET 00:02:02.201 URL: http://10.211.164.101/packages/dpdk_98613d32e3dac58d685f4f236cf8cc9733abaaf3.tar.gz 00:02:02.202 Sending request to url: http://10.211.164.101/packages/dpdk_98613d32e3dac58d685f4f236cf8cc9733abaaf3.tar.gz 00:02:02.206 Response Code: HTTP/1.1 200 OK 00:02:02.206 Success: Status code 200 is in the accepted range: 200,404 00:02:02.207 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_98613d32e3dac58d685f4f236cf8cc9733abaaf3.tar.gz 00:02:07.346 [Pipeline] } 00:02:07.367 [Pipeline] // retry 00:02:07.376 [Pipeline] sh 00:02:07.666 + tar --no-same-owner -xf dpdk_98613d32e3dac58d685f4f236cf8cc9733abaaf3.tar.gz 00:02:09.597 [Pipeline] sh 00:02:09.888 + git -C dpdk log --oneline -n5 00:02:09.888 98613d32e3 net/cnxk: support vector Tx multi-segment for CN20K 00:02:09.888 e829e60c69 net/cnxk: support Tx burst vector for CN20K 00:02:09.888 e634a59477 net/cnxk: support Tx multi-segment in CN20K 00:02:09.888 006c1daa89 net/cnxk: support Tx burst scalar for CN20K 00:02:09.888 9a8b99cf88 net/cnxk: support Rx burst vector for CN20K 00:02:09.900 [Pipeline] } 00:02:09.917 [Pipeline] // stage 00:02:09.925 [Pipeline] stage 00:02:09.928 [Pipeline] { (Prepare) 00:02:09.947 [Pipeline] writeFile 00:02:09.962 [Pipeline] sh 00:02:10.252 + logger -p user.info -t JENKINS-CI 00:02:10.266 [Pipeline] sh 00:02:10.556 + logger -p user.info -t JENKINS-CI 00:02:10.569 [Pipeline] sh 00:02:10.859 + cat autorun-spdk.conf 00:02:10.859 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.859 SPDK_TEST_NVMF=1 00:02:10.859 SPDK_TEST_NVME_CLI=1 00:02:10.859 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.859 SPDK_TEST_NVMF_NICS=e810 00:02:10.859 SPDK_TEST_VFIOUSER=1 00:02:10.859 SPDK_RUN_UBSAN=1 00:02:10.859 NET_TYPE=phy 00:02:10.859 SPDK_TEST_NATIVE_DPDK=main 00:02:10.859 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.868 RUN_NIGHTLY=1 00:02:10.872 [Pipeline] readFile 00:02:10.898 [Pipeline] withEnv 00:02:10.900 [Pipeline] { 00:02:10.913 [Pipeline] sh 00:02:11.204 + set -ex 00:02:11.204 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:11.204 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:11.204 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.204 ++ SPDK_TEST_NVMF=1 00:02:11.204 ++ SPDK_TEST_NVME_CLI=1 00:02:11.204 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.204 ++ SPDK_TEST_NVMF_NICS=e810 00:02:11.204 ++ SPDK_TEST_VFIOUSER=1 00:02:11.204 ++ SPDK_RUN_UBSAN=1 00:02:11.204 ++ NET_TYPE=phy 00:02:11.204 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:11.204 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.204 ++ RUN_NIGHTLY=1 00:02:11.204 + case $SPDK_TEST_NVMF_NICS in 00:02:11.204 + DRIVERS=ice 00:02:11.204 + [[ tcp == \r\d\m\a ]] 00:02:11.204 + [[ -n ice ]] 00:02:11.204 + 
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:11.204 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:11.204 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:11.204 rmmod: ERROR: Module irdma is not currently loaded 00:02:11.204 rmmod: ERROR: Module i40iw is not currently loaded 00:02:11.204 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:11.204 + true 00:02:11.204 + for D in $DRIVERS 00:02:11.204 + sudo modprobe ice 00:02:11.204 + exit 0 00:02:11.215 [Pipeline] } 00:02:11.229 [Pipeline] // withEnv 00:02:11.234 [Pipeline] } 00:02:11.248 [Pipeline] // stage 00:02:11.258 [Pipeline] catchError 00:02:11.260 [Pipeline] { 00:02:11.274 [Pipeline] timeout 00:02:11.274 Timeout set to expire in 1 hr 0 min 00:02:11.276 [Pipeline] { 00:02:11.290 [Pipeline] stage 00:02:11.292 [Pipeline] { (Tests) 00:02:11.307 [Pipeline] sh 00:02:11.598 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.598 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.598 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.599 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:11.599 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.599 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.599 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:11.599 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.599 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:11.599 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:11.599 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:11.599 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:11.599 + source /etc/os-release 00:02:11.599 ++ NAME='Fedora Linux' 00:02:11.599 ++ VERSION='39 (Cloud Edition)' 00:02:11.599 ++ ID=fedora 00:02:11.599 ++ VERSION_ID=39 00:02:11.599 ++ VERSION_CODENAME= 00:02:11.599 ++ PLATFORM_ID=platform:f39 00:02:11.599 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:11.599 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:11.599 ++ LOGO=fedora-logo-icon 00:02:11.599 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:11.599 ++ HOME_URL=https://fedoraproject.org/ 00:02:11.599 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:11.599 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:11.599 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:11.599 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:11.599 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:11.599 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:11.599 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:11.599 ++ SUPPORT_END=2024-11-12 00:02:11.599 ++ VARIANT='Cloud Edition' 00:02:11.599 ++ VARIANT_ID=cloud 00:02:11.599 + uname -a 00:02:11.599 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:11.599 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:14.906 Hugepages 00:02:14.906 node hugesize free / total 00:02:14.906 node0 1048576kB 0 / 0 00:02:14.906 node0 2048kB 0 / 0 00:02:14.906 node1 1048576kB 0 / 0 00:02:14.906 node1 2048kB 0 / 0 00:02:14.906 00:02:14.906 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:14.906 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:14.906 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:14.906 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:14.906 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:14.906 
I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:14.906 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:14.906 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:14.906 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:14.906 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:14.906 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:14.906 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:14.906 + rm -f /tmp/spdk-ld-path 00:02:14.906 + source autorun-spdk.conf 00:02:14.906 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.906 ++ SPDK_TEST_NVMF=1 00:02:14.906 ++ SPDK_TEST_NVME_CLI=1 00:02:14.906 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.906 ++ SPDK_TEST_NVMF_NICS=e810 00:02:14.906 ++ SPDK_TEST_VFIOUSER=1 00:02:14.906 ++ SPDK_RUN_UBSAN=1 00:02:14.906 ++ NET_TYPE=phy 00:02:14.906 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:14.906 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:14.906 ++ RUN_NIGHTLY=1 00:02:14.906 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:14.906 + [[ -n '' ]] 00:02:14.906 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:14.906 + for M in /var/spdk/build-*-manifest.txt 00:02:14.906 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:14.906 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:14.906 + for M in /var/spdk/build-*-manifest.txt 00:02:14.906 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:14.906 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:14.906 + for M in /var/spdk/build-*-manifest.txt 00:02:14.906 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:14.906 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:14.906 ++ uname 00:02:14.906 + [[ Linux == \L\i\n\u\x ]] 00:02:14.906 + sudo dmesg -T 00:02:14.906 + sudo dmesg --clear 00:02:14.906 + dmesg_pid=1351422 00:02:14.906 + [[ Fedora Linux == FreeBSD ]] 00:02:14.906 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.906 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.906 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:14.906 + [[ -x /usr/src/fio-static/fio ]] 00:02:14.906 + export FIO_BIN=/usr/src/fio-static/fio 00:02:14.906 + FIO_BIN=/usr/src/fio-static/fio 00:02:14.906 + sudo dmesg -Tw 00:02:14.906 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:14.906 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:14.906 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:14.906 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.906 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.906 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:14.906 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.906 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.906 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.168 Test configuration: 00:02:15.168 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.168 SPDK_TEST_NVMF=1 00:02:15.168 SPDK_TEST_NVME_CLI=1 00:02:15.168 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.168 SPDK_TEST_NVMF_NICS=e810 00:02:15.168 SPDK_TEST_VFIOUSER=1 00:02:15.168 SPDK_RUN_UBSAN=1 00:02:15.168 NET_TYPE=phy 00:02:15.168 SPDK_TEST_NATIVE_DPDK=main 00:02:15.168 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.168 RUN_NIGHTLY=1 13:58:18 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:15.168 13:58:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:15.168 13:58:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:15.168 13:58:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.168 13:58:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.168 13:58:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.168 13:58:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.168 13:58:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.168 13:58:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.168 13:58:18 -- paths/export.sh@5 -- $ export PATH 00:02:15.168 13:58:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.168 13:58:18 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:15.168 13:58:18 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:15.168 13:58:18 -- 
common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728820698.XXXXXX 00:02:15.168 13:58:18 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728820698.ngnGX2 00:02:15.168 13:58:18 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:15.168 13:58:18 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:02:15.168 13:58:18 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.168 13:58:18 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:15.169 13:58:18 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:15.169 13:58:18 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.169 13:58:18 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:15.169 13:58:18 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:15.169 13:58:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.169 13:58:18 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:15.169 13:58:18 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:15.169 13:58:18 -- pm/common@17 -- $ local monitor 00:02:15.169 13:58:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.169 13:58:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.169 13:58:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.169 13:58:18 -- pm/common@21 -- $ date +%s 00:02:15.169 13:58:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.169 13:58:18 -- pm/common@21 -- $ date +%s 00:02:15.169 13:58:18 -- pm/common@25 -- $ sleep 1 00:02:15.169 13:58:18 -- pm/common@21 -- $ date +%s 00:02:15.169 13:58:18 -- pm/common@21 -- $ date +%s 00:02:15.169 13:58:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728820698 00:02:15.169 13:58:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728820698 00:02:15.169 13:58:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728820698 00:02:15.169 13:58:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728820698 00:02:15.169 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728820698_collect-cpu-load.pm.log 00:02:15.169 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728820698_collect-vmstat.pm.log 00:02:15.169 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728820698_collect-cpu-temp.pm.log 00:02:15.169 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728820698_collect-bmc-pm.bmc.pm.log 00:02:16.113 13:58:19 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:16.113 13:58:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.113 13:58:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.113 13:58:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.113 13:58:19 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.113 Sun Oct 13 11:58:19 AM UTC 2024 00:02:16.113 13:58:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.113 v25.01-pre-55-gbbce7a874 00:02:16.113 13:58:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.113 13:58:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:16.113 13:58:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:16.113 13:58:19 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:16.113 13:58:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.113 13:58:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.113 ************************************ 00:02:16.113 START TEST ubsan 00:02:16.113 ************************************ 00:02:16.377 13:58:19 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:16.377 using ubsan 00:02:16.377 00:02:16.377 real 0m0.001s 00:02:16.377 user 0m0.000s 00:02:16.377 sys 0m0.001s 00:02:16.377 13:58:19 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:16.377 13:58:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.377 ************************************ 00:02:16.377 END TEST ubsan 00:02:16.377 ************************************ 00:02:16.377 13:58:19 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:16.377 13:58:19 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:16.377 13:58:19 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:16.377 13:58:19 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:16.377 13:58:19 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.377 13:58:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.377 ************************************ 00:02:16.377 START TEST build_native_dpdk 00:02:16.377 ************************************ 00:02:16.377 13:58:19 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:16.377 13:58:19 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ CC=gcc 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:16.377 98613d32e3 net/cnxk: support vector Tx multi-segment for CN20K 00:02:16.377 e829e60c69 net/cnxk: support Tx burst vector for CN20K 00:02:16.377 e634a59477 net/cnxk: support Tx multi-segment in CN20K 00:02:16.377 006c1daa89 net/cnxk: support Tx burst scalar for CN20K 00:02:16.377 9a8b99cf88 net/cnxk: support Rx burst vector for CN20K 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:16.377 13:58:19 
build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 21.11.0 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:16.377 patching file config/rte_config.h 00:02:16.377 Hunk #1 succeeded at 71 (offset 12 lines). 
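
The long scripts/common.sh trace above is SPDK's shell version comparator: `cmp_versions` splits each version string on `.`, `-` and `:` (the `IFS=.-:` assignments), walks the components left to right via the `decimal` helper, and `lt`/`ge` return success or failure accordingly; those verdicts decide which DPDK compatibility patches get applied (here 24.11.0-rc0 is not older than 21.11.0, so the modern `rte_config.h` hunk is the one applied). A compact standalone sketch of the same component-wise comparison follows, with the simplifying assumption that missing or non-numeric fields such as `rc0` count as 0 — the real `decimal` helper in scripts/common.sh handles these cases more carefully:

```bash
#!/usr/bin/env bash
# Simplified sketch of the cmp_versions logic traced above: split two
# version strings on '.', '-' and ':' and compare numeric fields in order.
# Returns 0 if v1 < v2, 1 otherwise (so it can back an "lt" helper).
version_lt() {
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing or non-numeric fields (e.g. "rc0") count as 0 -- an
        # assumption; scripts/common.sh treats these more precisely
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        [[ $d1 =~ ^[0-9]+$ ]] || d1=0
        [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        if (( 10#$d1 > 10#$d2 )); then return 1; fi   # 10# handles "07"
        if (( 10#$d1 < 10#$d2 )); then return 0; fi
    done
    return 1    # equal is not "less than"
}

version_lt 24.11.0-rc0 21.11.0 && echo older || echo "not older"   # not older
version_lt 24.11.0-rc0 24.07.0 && echo older || echo "not older"   # not older
```

The second call mirrors the `lt 24.11.0-rc0 24.07.0` / `ge 24.11.0-rc0 24.07.0` pair traced next in the log: it fails at the second component (11 > 07), so the `ge` branch wins and the `pci_uio.c` patch is applied.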
00:02:16.377 13:58:19 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:16.377 13:58:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:16.377 13:58:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:16.377 13:58:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:16.377 13:58:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.378 13:58:20 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:16.378 patching file drivers/bus/pci/linux/pci_uio.c 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:16.378 13:58:20 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:21.678 The Meson build system 00:02:21.678 Version: 1.5.0 00:02:21.678 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:21.678 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:21.678 Build type: native build 00:02:21.678 Program cat found: YES (/usr/bin/cat) 00:02:21.678 Project name: DPDK 
00:02:21.678 Project version: 24.11.0-rc0 00:02:21.678 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.678 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:21.678 Host machine cpu family: x86_64 00:02:21.678 Host machine cpu: x86_64 00:02:21.678 Message: ## Building in Developer Mode ## 00:02:21.678 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.678 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:21.678 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.678 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:21.678 Program cat found: YES (/usr/bin/cat) 00:02:21.678 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:21.678 Compiler for C supports arguments -march=native: YES 00:02:21.678 Checking for size of "void *" : 8 00:02:21.678 Checking for size of "void *" : 8 (cached) 00:02:21.678 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:21.678 Library m found: YES 00:02:21.678 Library numa found: YES 00:02:21.678 Has header "numaif.h" : YES 00:02:21.678 Library fdt found: NO 00:02:21.678 Library execinfo found: NO 00:02:21.678 Has header "execinfo.h" : YES 00:02:21.678 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.678 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.678 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.678 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.678 Run-time dependency openssl found: YES 3.1.1 00:02:21.678 Run-time dependency libpcap found: YES 1.10.4 00:02:21.678 Has header "pcap.h" with dependency libpcap: YES 00:02:21.678 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.678 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.678 Compiler for C supports arguments -Wformat: YES 00:02:21.678 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.678 Compiler for C supports arguments -Wformat-security: NO 00:02:21.678 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.678 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.678 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.678 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.678 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.678 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.678 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.678 Compiler for C supports arguments -Wundef: YES 00:02:21.678 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.678 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.678 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.678 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.678 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.678 Program objdump found: YES (/usr/bin/objdump) 00:02:21.678 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:21.678 Checking if "AVX512 checking" compiles: YES 00:02:21.678 Fetching value of define "__AVX512F__" : 1 00:02:21.678 Fetching value of define "__AVX512BW__" : 1 00:02:21.678 Fetching value of define "__AVX512DQ__" : 1 
00:02:21.678 Fetching value of define "__AVX512VL__" : 1 00:02:21.678 Fetching value of define "__SSE4_2__" : 1 00:02:21.678 Fetching value of define "__AES__" : 1 00:02:21.678 Fetching value of define "__AVX__" : 1 00:02:21.678 Fetching value of define "__AVX2__" : 1 00:02:21.678 Fetching value of define "__AVX512BW__" : 1 00:02:21.678 Fetching value of define "__AVX512CD__" : 1 00:02:21.678 Fetching value of define "__AVX512DQ__" : 1 00:02:21.678 Fetching value of define "__AVX512F__" : 1 00:02:21.678 Fetching value of define "__AVX512VL__" : 1 00:02:21.678 Fetching value of define "__PCLMUL__" : 1 00:02:21.678 Fetching value of define "__RDRND__" : 1 00:02:21.678 Fetching value of define "__RDSEED__" : 1 00:02:21.678 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:21.678 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.678 Message: lib/log: Defining dependency "log" 00:02:21.678 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.678 Message: lib/argparse: Defining dependency "argparse" 00:02:21.678 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.678 Checking for function "getentropy" : NO 00:02:21.678 Message: lib/eal: Defining dependency "eal" 00:02:21.678 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:21.678 Message: lib/ring: Defining dependency "ring" 00:02:21.678 Message: lib/rcu: Defining dependency "rcu" 00:02:21.678 Message: lib/mempool: Defining dependency "mempool" 00:02:21.678 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.678 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.678 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:21.678 Compiler for C supports arguments -mpclmul: YES 00:02:21.678 Compiler for C supports arguments -maes: YES 00:02:21.678 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.678 Message: lib/net: Defining dependency "net" 00:02:21.678 Message: lib/meter: Defining dependency "meter" 00:02:21.678 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.678 Message: lib/pci: Defining dependency "pci" 00:02:21.678 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.678 Message: lib/metrics: Defining dependency "metrics" 00:02:21.678 Message: lib/hash: Defining dependency "hash" 00:02:21.678 Message: lib/timer: Defining dependency "timer" 00:02:21.678 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.678 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.678 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:21.678 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.678 Message: lib/acl: Defining dependency "acl" 00:02:21.678 Message: lib/bbdev: Defining dependency "bbdev" 00:02:21.678 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:21.678 Run-time dependency libelf found: YES 0.191 00:02:21.678 Message: lib/bpf: Defining dependency "bpf" 00:02:21.678 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:21.678 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.678 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.678 Message: lib/distributor: Defining dependency "distributor" 00:02:21.678 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.678 Message: lib/efd: Defining dependency "efd" 00:02:21.678 Message: lib/eventdev: Defining dependency "eventdev" 00:02:21.678 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:21.678 Message: lib/gpudev: Defining dependency "gpudev" 00:02:21.678 Message: lib/gro: Defining 
dependency "gro" 00:02:21.678 Message: lib/gso: Defining dependency "gso" 00:02:21.678 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:21.678 Message: lib/jobstats: Defining dependency "jobstats" 00:02:21.678 Message: lib/latencystats: Defining dependency "latencystats" 00:02:21.678 Message: lib/lpm: Defining dependency "lpm" 00:02:21.678 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.678 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.678 Fetching value of define "__AVX512IFMA__" : 1 00:02:21.678 Message: lib/member: Defining dependency "member" 00:02:21.678 Message: lib/pcapng: Defining dependency "pcapng" 00:02:21.678 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.678 Message: lib/power: Defining dependency "power" 00:02:21.678 Message: lib/rawdev: Defining dependency "rawdev" 00:02:21.678 Message: lib/regexdev: Defining dependency "regexdev" 00:02:21.678 Message: lib/mldev: Defining dependency "mldev" 00:02:21.678 Message: lib/rib: Defining dependency "rib" 00:02:21.678 Message: lib/reorder: Defining dependency "reorder" 00:02:21.678 Message: lib/sched: Defining dependency "sched" 00:02:21.678 Message: lib/security: Defining dependency "security" 00:02:21.678 Message: lib/stack: Defining dependency "stack" 00:02:21.678 Has header "linux/userfaultfd.h" : YES 00:02:21.678 Has header "linux/vduse.h" : YES 00:02:21.678 Message: lib/vhost: Defining dependency "vhost" 00:02:21.678 Message: lib/ipsec: Defining dependency "ipsec" 00:02:21.678 Message: lib/pdcp: Defining dependency "pdcp" 00:02:21.679 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.679 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.679 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.679 Message: lib/fib: Defining dependency "fib" 00:02:21.679 Message: lib/port: Defining dependency "port" 00:02:21.679 Message: lib/pdump: Defining dependency "pdump" 00:02:21.679 Message: lib/table: Defining dependency "table" 00:02:21.679 Message: lib/pipeline: Defining dependency "pipeline" 00:02:21.679 Message: lib/graph: Defining dependency "graph" 00:02:21.679 Message: lib/node: Defining dependency "node" 00:02:21.679 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.679 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.679 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.679 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.679 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:21.679 Compiler for C supports arguments -Wno-unused-value: YES 00:02:21.679 Compiler for C supports arguments -Wno-format: YES 00:02:21.679 Compiler for C supports arguments -Wno-format-security: YES 00:02:23.070 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:23.070 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:23.070 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:23.070 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:23.070 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:23.070 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:23.070 Has header "sys/epoll.h" : YES 00:02:23.070 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.070 Configuring doxy-api-html.conf using configuration 00:02:23.070 Configuring doxy-api-man.conf using configuration 00:02:23.070 Program mandb found: YES (/usr/bin/mandb) 00:02:23.070 Program sphinx-build found: NO 
00:02:23.070 Configuring rte_build_config.h using configuration 00:02:23.070 Message: 00:02:23.070 ================= 00:02:23.070 Applications Enabled 00:02:23.070 ================= 00:02:23.070 00:02:23.070 apps: 00:02:23.070 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:23.070 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:23.070 test-pmd, test-regex, test-sad, test-security-perf, 00:02:23.070 00:02:23.070 Message: 00:02:23.070 ================= 00:02:23.070 Libraries Enabled 00:02:23.070 ================= 00:02:23.070 00:02:23.070 libs: 00:02:23.070 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:23.070 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:23.070 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:23.070 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:23.070 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:23.070 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:23.070 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:23.070 graph, node, 00:02:23.070 00:02:23.070 Message: 00:02:23.070 =============== 00:02:23.070 Drivers Enabled 00:02:23.070 =============== 00:02:23.070 00:02:23.070 common: 00:02:23.070 00:02:23.070 bus: 00:02:23.070 pci, vdev, 00:02:23.070 mempool: 00:02:23.070 ring, 00:02:23.070 dma: 00:02:23.070 00:02:23.070 net: 00:02:23.070 i40e, 00:02:23.070 raw: 00:02:23.070 00:02:23.070 crypto: 00:02:23.070 00:02:23.070 compress: 00:02:23.070 00:02:23.070 regex: 00:02:23.070 00:02:23.070 ml: 00:02:23.070 00:02:23.070 vdpa: 00:02:23.070 00:02:23.070 event: 00:02:23.070 00:02:23.070 baseband: 00:02:23.070 00:02:23.070 gpu: 00:02:23.070 00:02:23.070 00:02:23.070 Message: 00:02:23.070 ================= 00:02:23.070 Content Skipped 00:02:23.070 ================= 00:02:23.070 00:02:23.070 apps: 00:02:23.070 00:02:23.070 libs: 00:02:23.070 00:02:23.070 drivers: 00:02:23.070 common/cpt: not in enabled drivers build config 00:02:23.070 common/dpaax: not in enabled drivers build config 00:02:23.070 common/iavf: not in enabled drivers build config 00:02:23.070 common/idpf: not in enabled drivers build config 00:02:23.070 common/ionic: not in enabled drivers build config 00:02:23.070 common/mvep: not in enabled drivers build config 00:02:23.070 common/octeontx: not in enabled drivers build config 00:02:23.070 bus/auxiliary: not in enabled drivers build config 00:02:23.070 bus/cdx: not in enabled drivers build config 00:02:23.070 bus/dpaa: not in enabled drivers build config 00:02:23.070 bus/fslmc: not in enabled drivers build config 00:02:23.070 bus/ifpga: not in enabled drivers build config 00:02:23.070 bus/platform: not in enabled drivers build config 00:02:23.070 bus/uacce: not in enabled drivers build config 00:02:23.070 bus/vmbus: not in enabled drivers build config 00:02:23.070 common/cnxk: not in enabled drivers build config 00:02:23.070 common/mlx5: not in enabled drivers build config 00:02:23.070 common/nfp: not in enabled drivers build config 00:02:23.070 common/nitrox: not in enabled drivers build config 00:02:23.070 common/qat: not in enabled drivers build config 00:02:23.070 common/sfc_efx: not in enabled drivers build config 00:02:23.070 mempool/bucket: not in enabled drivers build config 00:02:23.070 mempool/cnxk: not in enabled drivers build config 00:02:23.070 mempool/dpaa: not in enabled drivers build 
config 00:02:23.070 mempool/dpaa2: not in enabled drivers build config 00:02:23.070 mempool/octeontx: not in enabled drivers build config 00:02:23.070 mempool/stack: not in enabled drivers build config 00:02:23.070 dma/cnxk: not in enabled drivers build config 00:02:23.070 dma/dpaa: not in enabled drivers build config 00:02:23.070 dma/dpaa2: not in enabled drivers build config 00:02:23.070 dma/hisilicon: not in enabled drivers build config 00:02:23.070 dma/idxd: not in enabled drivers build config 00:02:23.070 dma/ioat: not in enabled drivers build config 00:02:23.070 dma/odm: not in enabled drivers build config 00:02:23.070 dma/skeleton: not in enabled drivers build config 00:02:23.070 net/af_packet: not in enabled drivers build config 00:02:23.070 net/af_xdp: not in enabled drivers build config 00:02:23.070 net/ark: not in enabled drivers build config 00:02:23.070 net/atlantic: not in enabled drivers build config 00:02:23.070 net/avp: not in enabled drivers build config 00:02:23.070 net/axgbe: not in enabled drivers build config 00:02:23.070 net/bnx2x: not in enabled drivers build config 00:02:23.070 net/bnxt: not in enabled drivers build config 00:02:23.070 net/bonding: not in enabled drivers build config 00:02:23.070 net/cnxk: not in enabled drivers build config 00:02:23.070 net/cpfl: not in enabled drivers build config 00:02:23.070 net/cxgbe: not in enabled drivers build config 00:02:23.070 net/dpaa: not in enabled drivers build config 00:02:23.070 net/dpaa2: not in enabled drivers build config 00:02:23.070 net/e1000: not in enabled drivers build config 00:02:23.070 net/ena: not in enabled drivers build config 00:02:23.070 net/enetc: not in enabled drivers build config 00:02:23.070 net/enetfec: not in enabled drivers build config 00:02:23.070 net/enic: not in enabled drivers build config 00:02:23.070 net/failsafe: not in enabled drivers build config 00:02:23.070 net/fm10k: not in enabled drivers build config 00:02:23.070 net/gve: not in enabled drivers build config 00:02:23.070 net/hinic: not in enabled drivers build config 00:02:23.070 net/hns3: not in enabled drivers build config 00:02:23.070 net/iavf: not in enabled drivers build config 00:02:23.070 net/ice: not in enabled drivers build config 00:02:23.070 net/idpf: not in enabled drivers build config 00:02:23.070 net/igc: not in enabled drivers build config 00:02:23.070 net/ionic: not in enabled drivers build config 00:02:23.070 net/ipn3ke: not in enabled drivers build config 00:02:23.070 net/ixgbe: not in enabled drivers build config 00:02:23.070 net/mana: not in enabled drivers build config 00:02:23.070 net/memif: not in enabled drivers build config 00:02:23.070 net/mlx4: not in enabled drivers build config 00:02:23.070 net/mlx5: not in enabled drivers build config 00:02:23.070 net/mvneta: not in enabled drivers build config 00:02:23.070 net/mvpp2: not in enabled drivers build config 00:02:23.070 net/netvsc: not in enabled drivers build config 00:02:23.070 net/nfb: not in enabled drivers build config 00:02:23.070 net/nfp: not in enabled drivers build config 00:02:23.070 net/ngbe: not in enabled drivers build config 00:02:23.070 net/ntnic: not in enabled drivers build config 00:02:23.070 net/null: not in enabled drivers build config 00:02:23.070 net/octeontx: not in enabled drivers build config 00:02:23.070 net/octeon_ep: not in enabled drivers build config 00:02:23.070 net/pcap: not in enabled drivers build config 00:02:23.070 net/pfe: not in enabled drivers build config 00:02:23.070 net/qede: not in enabled drivers build config 
00:02:23.070 net/ring: not in enabled drivers build config 00:02:23.070 net/sfc: not in enabled drivers build config 00:02:23.070 net/softnic: not in enabled drivers build config 00:02:23.070 net/tap: not in enabled drivers build config 00:02:23.070 net/thunderx: not in enabled drivers build config 00:02:23.070 net/txgbe: not in enabled drivers build config 00:02:23.070 net/vdev_netvsc: not in enabled drivers build config 00:02:23.071 net/vhost: not in enabled drivers build config 00:02:23.071 net/virtio: not in enabled drivers build config 00:02:23.071 net/vmxnet3: not in enabled drivers build config 00:02:23.071 raw/cnxk_bphy: not in enabled drivers build config 00:02:23.071 raw/cnxk_gpio: not in enabled drivers build config 00:02:23.071 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:23.071 raw/ifpga: not in enabled drivers build config 00:02:23.071 raw/ntb: not in enabled drivers build config 00:02:23.071 raw/skeleton: not in enabled drivers build config 00:02:23.071 crypto/armv8: not in enabled drivers build config 00:02:23.071 crypto/bcmfs: not in enabled drivers build config 00:02:23.071 crypto/caam_jr: not in enabled drivers build config 00:02:23.071 crypto/ccp: not in enabled drivers build config 00:02:23.071 crypto/cnxk: not in enabled drivers build config 00:02:23.071 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.071 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.071 crypto/ionic: not in enabled drivers build config 00:02:23.071 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.071 crypto/mlx5: not in enabled drivers build config 00:02:23.071 crypto/mvsam: not in enabled drivers build config 00:02:23.071 crypto/nitrox: not in enabled drivers build config 00:02:23.071 crypto/null: not in enabled drivers build config 00:02:23.071 crypto/octeontx: not in enabled drivers build config 00:02:23.071 crypto/openssl: not in enabled drivers build config 00:02:23.071 crypto/scheduler: not in enabled drivers build config 00:02:23.071 crypto/uadk: not in enabled drivers build config 00:02:23.071 crypto/virtio: not in enabled drivers build config 00:02:23.071 compress/isal: not in enabled drivers build config 00:02:23.071 compress/mlx5: not in enabled drivers build config 00:02:23.071 compress/nitrox: not in enabled drivers build config 00:02:23.071 compress/octeontx: not in enabled drivers build config 00:02:23.071 compress/uadk: not in enabled drivers build config 00:02:23.071 compress/zlib: not in enabled drivers build config 00:02:23.071 regex/mlx5: not in enabled drivers build config 00:02:23.071 regex/cn9k: not in enabled drivers build config 00:02:23.071 ml/cnxk: not in enabled drivers build config 00:02:23.071 vdpa/ifc: not in enabled drivers build config 00:02:23.071 vdpa/mlx5: not in enabled drivers build config 00:02:23.071 vdpa/nfp: not in enabled drivers build config 00:02:23.071 vdpa/sfc: not in enabled drivers build config 00:02:23.071 event/cnxk: not in enabled drivers build config 00:02:23.071 event/dlb2: not in enabled drivers build config 00:02:23.071 event/dpaa: not in enabled drivers build config 00:02:23.071 event/dpaa2: not in enabled drivers build config 00:02:23.071 event/dsw: not in enabled drivers build config 00:02:23.071 event/opdl: not in enabled drivers build config 00:02:23.071 event/skeleton: not in enabled drivers build config 00:02:23.071 event/sw: not in enabled drivers build config 00:02:23.071 event/octeontx: not in enabled drivers build config 00:02:23.071 baseband/acc: not in enabled drivers build 
config 00:02:23.071 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:23.071 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:23.071 baseband/la12xx: not in enabled drivers build config 00:02:23.071 baseband/null: not in enabled drivers build config 00:02:23.071 baseband/turbo_sw: not in enabled drivers build config 00:02:23.071 gpu/cuda: not in enabled drivers build config
00:02:23.071
00:02:23.071
00:02:23.071 Build targets in project: 219
00:02:23.071
00:02:23.071 DPDK 24.11.0-rc0
00:02:23.071
00:02:23.071 User defined options
00:02:23.071 libdir : lib
00:02:23.071 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:23.071 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:23.071 c_link_args :
00:02:23.071 enable_docs : false
00:02:23.071 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:23.071 enable_kmods : false
00:02:23.071 machine : native
00:02:23.071 tests : false
00:02:23.071
00:02:23.071 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:23.071 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:23.335 13:58:26 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144
00:02:23.335 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:23.609 [1/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.609 [2/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.609 [3/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.609 [4/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.609 [5/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.609 [6/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.609 [7/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.609 [8/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.609 [9/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.884 [10/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.884 [11/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.884 [12/718] Linking static target lib/librte_kvargs.a 00:02:23.884 [13/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.884 [14/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.884 [15/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.884 [16/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.884 [17/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.884 [18/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.884 [19/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.884 [20/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.149 [21/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.149 [22/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.149 [23/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.149 [24/718] Compiling C object
lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.149 [25/718] Linking static target lib/librte_log.a 00:02:24.149 [26/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.149 [27/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.149 [28/718] Linking static target lib/librte_pci.a 00:02:24.149 [29/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.149 [30/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:24.149 [31/718] Linking static target lib/librte_argparse.a 00:02:24.149 [32/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.408 [33/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.408 [34/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.408 [35/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.408 [36/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.408 [37/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.408 [38/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.678 [39/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.678 [40/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.678 [41/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.678 [42/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.678 [43/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.678 [44/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.678 [45/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.678 [46/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.678 [47/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:24.678 [48/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.678 [49/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.678 [50/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.678 [51/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.678 [52/718] Linking static target lib/librte_cfgfile.a 00:02:24.678 [53/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:24.678 [54/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.678 [55/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.678 [56/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:24.678 [57/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.678 [58/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.678 [59/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.678 [60/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.678 [61/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.678 [62/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.678 [63/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.678 [64/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:24.678 [65/718] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.678 [66/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.678 [67/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.678 [68/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.678 [69/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.678 [70/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.678 [71/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.678 [72/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.678 [73/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.678 [74/718] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.678 [75/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:24.678 [76/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.678 [77/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.678 [78/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.678 [79/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.678 [80/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.678 [81/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.678 [82/718] Linking static target lib/librte_meter.a 00:02:24.678 [83/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.678 [84/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:24.678 [85/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.678 [86/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.678 [87/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:24.678 [88/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.679 [89/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.943 [90/718] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.943 [91/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.943 [92/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:24.943 [93/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.943 [94/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.943 [95/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.943 [96/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.943 [97/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.943 [98/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:24.943 [99/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.943 [100/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.943 [101/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:24.943 [102/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.943 [103/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:24.943 [104/718] Linking static target lib/librte_ring.a 00:02:24.943 [105/718] Linking static target lib/librte_cmdline.a 00:02:24.943 [106/718] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.943 [107/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:24.943 [108/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.943 [109/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.943 [110/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:24.943 [111/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.943 [112/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.943 [113/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.943 [114/718] Linking static target lib/librte_metrics.a 00:02:24.943 [115/718] Linking static target lib/librte_bitratestats.a 00:02:24.943 [116/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.943 [117/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.943 [118/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.943 [119/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.943 [120/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.943 [121/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.943 [122/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.943 [123/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.943 [124/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.205 [125/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.205 [126/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.205 [127/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.205 [128/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.205 [129/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.205 [130/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.205 [131/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.205 [132/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:25.205 [133/718] Linking static target lib/librte_net.a 00:02:25.205 [134/718] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:25.205 [135/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:25.205 [136/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.205 [137/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:25.205 [138/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:25.205 [139/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:25.205 [140/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.205 [141/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.205 [142/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:25.205 [143/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.205 [144/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.205 [145/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:25.205 [146/718] Compiling C object 
lib/librte_port.a.p/port_port_log.c.o 00:02:25.205 [147/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.205 [148/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.205 [149/718] Linking static target lib/librte_compressdev.a 00:02:25.205 [150/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.205 [151/718] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.205 [152/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:25.205 [153/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.205 [154/718] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:25.205 [155/718] Linking static target lib/librte_timer.a 00:02:25.205 [156/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.205 [157/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:25.205 [158/718] Linking target lib/librte_log.so.25.0 00:02:25.205 [159/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.205 [160/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:25.205 [161/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.468 [162/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.468 [163/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:25.468 [164/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.468 [165/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.468 [166/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.468 [167/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.468 [168/718] Linking static target lib/librte_dmadev.a 00:02:25.468 [169/718] Linking static target lib/librte_mempool.a 00:02:25.468 [170/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:25.468 [171/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:25.468 [172/718] Linking static target lib/librte_jobstats.a 00:02:25.468 [173/718] Linking static target lib/librte_bbdev.a 00:02:25.468 [174/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:25.468 [175/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:25.468 [176/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:25.468 [177/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:25.468 [178/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:25.468 [179/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:25.468 [180/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.468 [181/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.468 [182/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:25.468 [183/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.468 [184/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:25.468 [185/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:25.468 [186/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:25.468 [187/718] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:25.468 [188/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:25.468 [189/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.468 [190/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.468 [191/718] Linking target lib/librte_kvargs.so.25.0 00:02:25.468 [192/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:25.468 [193/718] Linking target lib/librte_argparse.so.25.0 00:02:25.468 [194/718] Linking static target lib/librte_stack.a 00:02:25.468 [195/718] Linking static target lib/librte_distributor.a 00:02:25.468 [196/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:25.468 [197/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.468 [198/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.468 [199/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:25.468 [200/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:25.468 [201/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:25.468 [202/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.468 [203/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.468 [204/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.468 [205/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:25.732 [206/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:25.732 [207/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:25.732 [208/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.732 [209/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.732 [210/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.732 [211/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:25.732 [212/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:25.732 [213/718] Linking static target lib/librte_telemetry.a 00:02:25.732 [214/718] Linking static target lib/librte_dispatcher.a 00:02:25.732 [215/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.732 [216/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.732 [217/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:25.732 [218/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:25.732 [219/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.732 [220/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.732 [221/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:25.732 [222/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:25.732 [223/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:25.732 [224/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:25.732 [225/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.732 [226/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.732 [227/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:25.732 [228/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:25.732 [229/718] 
Linking static target lib/librte_eal.a 00:02:25.732 [230/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.732 [231/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:25.732 [232/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.732 [233/718] Linking static target lib/librte_regexdev.a 00:02:25.732 [234/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.732 [235/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:25.732 [236/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:25.732 [237/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:25.732 [238/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.732 [239/718] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:25.732 [240/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:25.732 [241/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:25.732 [242/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.732 [243/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.732 [244/718] Linking static target lib/librte_rawdev.a 00:02:25.732 [245/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:25.732 [246/718] Linking static target lib/librte_gro.a 00:02:25.732 [247/718] Linking static target lib/librte_mldev.a 00:02:25.732 [248/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:25.732 [249/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.732 [250/718] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:25.732 [251/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:25.732 [252/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:25.732 [253/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.732 [254/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.992 [255/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:25.992 [256/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:25.992 [257/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.992 [258/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.992 [259/718] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.992 [260/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.992 [261/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:25.992 [262/718] Linking static target lib/librte_latencystats.a 00:02:25.992 [263/718] Linking static target lib/librte_gso.a 00:02:25.992 [264/718] Linking static target lib/librte_reorder.a 00:02:25.992 [265/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.992 [266/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.992 [267/718] Linking static target lib/librte_gpudev.a 00:02:25.992 [268/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.992 [269/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:25.992 [270/718] Linking static target lib/librte_security.a 00:02:25.992 [271/718] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:25.992 [272/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:25.992 [273/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:25.992 [274/718] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.992 [275/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:25.992 [276/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.992 [277/718] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.992 [278/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:25.992 [279/718] Linking static target lib/librte_rcu.a 00:02:25.992 [280/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:25.992 [281/718] Linking static target lib/librte_ip_frag.a 00:02:25.992 [282/718] Linking static target lib/librte_power.a 00:02:25.992 [283/718] Linking static target lib/librte_pcapng.a 00:02:25.992 [284/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.992 [285/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.992 [286/718] Linking static target lib/librte_mbuf.a 00:02:25.992 [287/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.992 [288/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:25.992 [289/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:25.992 [290/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:25.992 [291/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:25.992 [292/718] Linking static target lib/librte_bpf.a 00:02:25.992 [293/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:25.992 [294/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:25.992 [295/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.263 [296/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:26.263 [297/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:26.263 [298/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:26.263 [299/718] Linking static target lib/librte_rib.a 00:02:26.263 [300/718] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:26.263 [301/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.263 [302/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:26.263 [303/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:26.263 [304/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:26.263 [305/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [306/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:26.263 [307/718] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [308/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:26.263 [309/718] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [310/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:26.263 [311/718] Compiling C object 
lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:26.263 [312/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:26.263 [313/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.263 [314/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:26.263 [315/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:26.263 [316/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:26.263 [317/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:26.263 [318/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [319/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:26.263 [320/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:26.263 [321/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:26.263 [322/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:26.263 [323/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:26.263 [324/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [325/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:26.263 [326/718] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:26.263 [327/718] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:26.263 [328/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [329/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:26.263 [330/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:26.263 [331/718] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:26.263 [332/718] Linking static target lib/librte_lpm.a 00:02:26.263 [333/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.263 [334/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.263 [335/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.263 [336/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:26.263 [337/718] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:26.263 [338/718] Linking target lib/librte_telemetry.so.25.0 00:02:26.263 [339/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:26.529 [340/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.529 [341/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:26.529 [342/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:26.529 [343/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.529 [344/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.529 [345/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:26.529 [346/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:26.529 [347/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.529 [348/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:26.529 [349/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.529 [350/718] Generating lib/dispatcher.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:26.529 [351/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.529 [352/718] Linking static target lib/librte_efd.a 00:02:26.529 [353/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:26.529 [354/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:26.529 [355/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:26.529 [356/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.529 [357/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:26.529 [358/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:26.529 [359/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.529 [360/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:26.529 [361/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.529 [362/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:26.529 [363/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.529 [364/718] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:26.529 [365/718] Linking static target lib/librte_fib.a 00:02:26.529 [366/718] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.529 [367/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.529 [368/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:26.529 [369/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:26.529 [370/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:26.529 [371/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:26.529 [372/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:26.529 [373/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.529 [374/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.529 [375/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:26.865 [376/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:26.865 [377/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.865 [378/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.865 [379/718] Linking static target lib/librte_graph.a 00:02:26.865 [380/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:26.865 [381/718] Linking static target lib/librte_pdump.a 00:02:26.865 [382/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:26.865 [383/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:26.865 [384/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:26.865 [385/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:26.865 [386/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:26.865 [387/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:26.865 [388/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:26.865 [389/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.865 
[390/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:26.865 [391/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:26.865 [392/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.865 [393/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.865 [394/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:26.865 [395/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.865 [396/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.865 [397/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.865 [398/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.865 [399/718] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.865 [400/718] Linking static target drivers/librte_bus_vdev.a 00:02:26.865 [401/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:26.865 [402/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.865 [403/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:26.865 [404/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:26.865 [405/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.865 [406/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.865 [407/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:26.865 [408/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.865 [409/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:26.865 [410/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:26.865 [411/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:26.865 [412/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.865 [413/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:26.866 [414/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:26.866 [415/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:26.866 [416/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:26.866 [417/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:26.866 [418/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:27.125 [419/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:27.125 [420/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.125 [421/718] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:27.125 [422/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:27.125 [423/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:27.125 [424/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:27.125 [425/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:27.125 [426/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:27.125 [427/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:27.125 [428/718] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:27.125 [429/718] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:27.125 [430/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:27.125 [431/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:27.125 [432/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:27.125 [433/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:27.125 [434/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:27.125 [435/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.125 [436/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.125 [437/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:27.125 [438/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:27.125 [439/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:27.125 [440/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:27.125 [441/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:27.125 [442/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:27.125 [443/718] Linking static target lib/librte_table.a 00:02:27.125 [444/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.125 [445/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:27.125 [446/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:27.125 [447/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:27.125 [448/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:27.126 [449/718] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.126 [450/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:27.126 [451/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:27.126 [452/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:27.126 [453/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:27.126 [454/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:27.126 [455/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:27.126 [456/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:27.126 [457/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:27.126 [458/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:27.126 [459/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.126 [460/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.126 [461/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.126 [462/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:27.126 [463/718] Linking static target drivers/librte_bus_pci.a 00:02:27.126 [464/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:27.126 [465/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:27.126 [466/718] Generating lib/gpudev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:27.126 [467/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.126 [468/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:27.126 [469/718] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:27.385 [470/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:27.385 [471/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:27.385 [472/718] Linking static target lib/librte_cryptodev.a 00:02:27.385 [473/718] Linking static target lib/librte_sched.a 00:02:27.385 [474/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:27.385 [475/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:27.385 [476/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:27.385 [477/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:27.385 [478/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:27.385 [479/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:27.385 [480/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:27.385 [481/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:27.385 [482/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:27.385 [483/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:27.385 [484/718] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:27.385 [485/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:27.385 [486/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:27.385 [487/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:27.385 [488/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:27.385 [489/718] Linking static target lib/librte_node.a 00:02:27.385 [490/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:27.385 [491/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:27.385 [492/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:27.385 [493/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:27.385 [494/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:27.385 [495/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.385 [496/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:27.385 [497/718] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.385 [498/718] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.385 [499/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:27.385 [500/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.385 [501/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:27.385 [502/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:27.385 [503/718] Compiling C object 
app/dpdk-proc-info.p/proc-info_main.c.o 00:02:27.385 [504/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:27.385 [505/718] Linking static target drivers/librte_mempool_ring.a 00:02:27.385 [506/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:27.385 [507/718] Linking static target lib/librte_member.a 00:02:27.385 [508/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:27.385 [509/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:27.644 [510/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:27.644 [511/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:27.644 [512/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:27.644 [513/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:27.644 [514/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:27.644 [515/718] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:27.644 [516/718] Linking static target lib/librte_pdcp.a 00:02:27.644 [517/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:27.645 [518/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:27.645 [519/718] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:27.645 [520/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:27.645 [521/718] Linking static target lib/acl/libavx2_tmp.a 00:02:27.645 [522/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:27.645 [523/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:27.645 [524/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:27.645 [525/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:27.645 [526/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:27.645 [527/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:27.645 [528/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:27.645 [529/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:27.645 [530/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:27.645 [531/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:27.645 [532/718] Linking static target lib/librte_ipsec.a 00:02:27.645 [533/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:27.645 [534/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:27.645 [535/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:27.645 [536/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:27.645 [537/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:27.645 [538/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:27.645 [539/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:27.645 [540/718] Linking static target lib/librte_port.a 00:02:27.645 [541/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:27.905 [542/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:27.905 [543/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:27.905 
[544/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:27.905 [545/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:27.905 [546/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:27.905 [547/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:27.905 [548/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:27.905 [549/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.905 [550/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:27.905 [551/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.905 [552/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:27.905 [553/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:27.905 [554/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:27.905 [555/718] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.905 [556/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.905 [557/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:27.905 [558/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:27.905 [559/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:27.905 [560/718] Linking static target lib/librte_hash.a 00:02:27.905 [561/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.905 [562/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:27.905 [563/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:27.905 [564/718] Linking static target lib/librte_acl.a 00:02:27.905 [565/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:28.165 [566/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.165 [567/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:28.165 [568/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:28.165 [569/718] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:28.165 [570/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.165 [571/718] Linking static target lib/librte_eventdev.a 00:02:28.165 [572/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:28.165 [573/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:28.165 [574/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.165 [575/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:28.165 [576/718] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:28.165 [577/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.426 [578/718] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:28.426 [579/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:28.426 [580/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:28.426 [581/718] Generating 
lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.687 [582/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.688 [583/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:28.688 [584/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:28.950 [585/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.950 [586/718] Linking static target lib/librte_ethdev.a 00:02:28.950 [587/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.211 [588/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:29.211 [589/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:29.473 [590/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:29.473 [591/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.735 [592/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:29.735 [593/718] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:29.735 [594/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:29.997 [595/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.997 [596/718] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:29.997 [597/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:29.997 [598/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:30.258 [599/718] Linking static target drivers/librte_net_i40e.a 00:02:31.202 [600/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:31.202 [601/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.202 [602/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:31.773 [603/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.078 [604/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:35.078 [605/718] Linking static target lib/librte_pipeline.a 00:02:36.994 [606/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.255 [607/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.255 [608/718] Linking target lib/librte_eal.so.25.0 00:02:37.255 [609/718] Linking static target lib/librte_vhost.a 00:02:37.255 [610/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:02:37.516 [611/718] Linking target lib/librte_timer.so.25.0 00:02:37.516 [612/718] Linking target lib/librte_ring.so.25.0 00:02:37.516 [613/718] Linking target lib/librte_meter.so.25.0 00:02:37.516 [614/718] Linking target lib/librte_pci.so.25.0 00:02:37.516 [615/718] Linking target lib/librte_cfgfile.so.25.0 00:02:37.516 [616/718] Linking target drivers/librte_bus_vdev.so.25.0 00:02:37.516 [617/718] Linking target lib/librte_jobstats.so.25.0 00:02:37.516 [618/718] Linking target lib/librte_dmadev.so.25.0 00:02:37.516 [619/718] Linking target lib/librte_stack.so.25.0 00:02:37.516 [620/718] Linking target lib/librte_rawdev.so.25.0 00:02:37.516 [621/718] Linking target lib/librte_acl.so.25.0 00:02:37.516 [622/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 
00:02:37.516 [623/718] Linking target app/dpdk-test-acl 00:02:37.516 [624/718] Linking target app/dpdk-test-mldev 00:02:37.516 [625/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:02:37.516 [626/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:02:37.516 [627/718] Linking target app/dpdk-test-dma-perf 00:02:37.516 [628/718] Linking target app/dpdk-dumpcap 00:02:37.516 [629/718] Linking target app/dpdk-test-cmdline 00:02:37.516 [630/718] Linking target app/dpdk-test-bbdev 00:02:37.516 [631/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:02:37.516 [632/718] Linking target app/dpdk-test-crypto-perf 00:02:37.516 [633/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:02:37.516 [634/718] Linking target app/dpdk-test-eventdev 00:02:37.516 [635/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:37.516 [636/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:02:37.516 [637/718] Linking target app/dpdk-testpmd 00:02:37.516 [638/718] Linking target drivers/librte_bus_pci.so.25.0 00:02:37.516 [639/718] Linking target app/dpdk-test-fib 00:02:37.516 [640/718] Linking target app/dpdk-test-flow-perf 00:02:37.516 [641/718] Linking target app/dpdk-test-regex 00:02:37.516 [642/718] Linking target app/dpdk-pdump 00:02:37.516 [643/718] Linking target lib/librte_rcu.so.25.0 00:02:37.516 [644/718] Linking target lib/librte_mempool.so.25.0 00:02:37.516 [645/718] Linking target app/dpdk-test-gpudev 00:02:37.516 [646/718] Linking target app/dpdk-test-sad 00:02:37.516 [647/718] Linking target app/dpdk-proc-info 00:02:37.516 [648/718] Linking target app/dpdk-test-security-perf 00:02:37.516 [649/718] Linking target app/dpdk-graph 00:02:37.516 [650/718] Linking target app/dpdk-test-pipeline 00:02:37.516 [651/718] Linking target app/dpdk-test-compress-perf 00:02:37.776 [652/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:37.776 [653/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:02:37.776 [654/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:02:37.776 [655/718] Linking target lib/librte_mbuf.so.25.0 00:02:37.776 [656/718] Linking target lib/librte_rib.so.25.0 00:02:37.776 [657/718] Linking target drivers/librte_mempool_ring.so.25.0 00:02:37.776 [658/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:02:38.038 [659/718] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:38.038 [660/718] Linking target lib/librte_bbdev.so.25.0 00:02:38.038 [661/718] Linking target lib/librte_reorder.so.25.0 00:02:38.038 [662/718] Linking target lib/librte_compressdev.so.25.0 00:02:38.038 [663/718] Linking target lib/librte_net.so.25.0 00:02:38.038 [664/718] Linking target lib/librte_gpudev.so.25.0 00:02:38.038 [665/718] Linking target lib/librte_distributor.so.25.0 00:02:38.038 [666/718] Linking target lib/librte_mldev.so.25.0 00:02:38.038 [667/718] Linking target lib/librte_regexdev.so.25.0 00:02:38.038 [668/718] Linking target lib/librte_sched.so.25.0 00:02:38.038 [669/718] Linking target lib/librte_cryptodev.so.25.0 00:02:38.038 [670/718] Linking target lib/librte_fib.so.25.0 00:02:38.038 [671/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.038 [672/718] Generating symbol 
file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:38.038 [673/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:38.038 [674/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:02:38.038 [675/718] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:02:38.038 [676/718] Linking target lib/librte_security.so.25.0 00:02:38.038 [677/718] Linking target lib/librte_cmdline.so.25.0 00:02:38.038 [678/718] Linking target lib/librte_hash.so.25.0 00:02:38.038 [679/718] Linking target lib/librte_ethdev.so.25.0 00:02:38.299 [680/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:38.299 [681/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:02:38.299 [682/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:02:38.299 [683/718] Linking target lib/librte_pdcp.so.25.0 00:02:38.299 [684/718] Linking target lib/librte_efd.so.25.0 00:02:38.299 [685/718] Linking target lib/librte_lpm.so.25.0 00:02:38.299 [686/718] Linking target lib/librte_member.so.25.0 00:02:38.299 [687/718] Linking target lib/librte_ipsec.so.25.0 00:02:38.299 [688/718] Linking target lib/librte_ip_frag.so.25.0 00:02:38.299 [689/718] Linking target lib/librte_pcapng.so.25.0 00:02:38.299 [690/718] Linking target lib/librte_metrics.so.25.0 00:02:38.299 [691/718] Linking target lib/librte_gro.so.25.0 00:02:38.299 [692/718] Linking target lib/librte_gso.so.25.0 00:02:38.299 [693/718] Linking target lib/librte_bpf.so.25.0 00:02:38.299 [694/718] Linking target lib/librte_power.so.25.0 00:02:38.299 [695/718] Linking target lib/librte_eventdev.so.25.0 00:02:38.299 [696/718] Linking target drivers/librte_net_i40e.so.25.0 00:02:38.560 [697/718] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:02:38.560 [698/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:02:38.560 [699/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:02:38.560 [700/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:38.560 [701/718] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:02:38.560 [702/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:02:38.560 [703/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:02:38.560 [704/718] Linking target lib/librte_latencystats.so.25.0 00:02:38.560 [705/718] Linking target lib/librte_bitratestats.so.25.0 00:02:38.560 [706/718] Linking target lib/librte_dispatcher.so.25.0 00:02:38.560 [707/718] Linking target lib/librte_pdump.so.25.0 00:02:38.560 [708/718] Linking target lib/librte_graph.so.25.0 00:02:38.560 [709/718] Linking target lib/librte_port.so.25.0 00:02:38.560 [710/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:38.821 [711/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:38.821 [712/718] Linking target lib/librte_node.so.25.0 00:02:38.821 [713/718] Linking target lib/librte_table.so.25.0 00:02:38.821 [714/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:39.392 [715/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.392 [716/718] Linking target lib/librte_vhost.so.25.0 
00:02:40.777 [717/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.777 [718/718] Linking target lib/librte_pipeline.so.25.0 00:02:40.777 13:58:44 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:40.777 13:58:44 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:40.777 13:58:44 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:02:40.777 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:40.777 [0/1] Installing files. 00:02:41.042 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:41.043 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.043 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_eddsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:41.044 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:41.044 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 
00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:41.045 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.045 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:41.046 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.047 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 
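Everything staged under build/share/dpdk/examples up to this point is source, not binaries: the log copies .c and .h files, per-example Makefiles, and config fragments (lpm_default_v6.cfg, em_default_v4.cfg, the ip_pipeline .cli scripts) so the examples can be rebuilt out of tree against the installed SDK. Structurally, each of these examples opens with the same EAL bootstrap; a minimal sketch of that shared skeleton (written for illustration, not copied from any staged file) looks like:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_debug.h>

    /* Illustrative skeleton of the boilerplate the staged examples
     * (l3fwd, ipsec-secgw, ip_pipeline, ...) share: bring the EAL up,
     * do the example-specific work, tear the EAL down. */
    int
    main(int argc, char **argv)
    {
        int ret = rte_eal_init(argc, argv); /* consumes the EAL arguments */
        if (ret < 0)
            rte_panic("Cannot init EAL\n");
        argc -= ret;                        /* what remains is app-specific */
        argv += ret;

        printf("EAL up; %d application argument(s) remain\n", argc - 1);

        /* ... example-specific setup and packet loop would go here ... */

        rte_eal_cleanup();                  /* release hugepages and EAL state */
        return 0;
    }

The examples differ only in what they do between init and cleanup.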
00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:41.048 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:41.048 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:41.048 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
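From here the install switches from example sources to the libraries themselves. Each component lands twice in build/lib: a static archive (librte_*.a) and an ABI-versioned shared object (librte_*.so.25.0), so consumers can link either way. As a hypothetical smoke test of the freshly installed librte_kvargs (its parser needs no EAL setup), something like the following could be compiled against build/include and linked against build/lib:

    #include <stdio.h>
    #include <rte_kvargs.h>

    /* Handler invoked once per matching key=value pair. */
    static int
    print_kv(const char *key, const char *value, void *opaque)
    {
        (void)opaque;
        printf("%s = %s\n", key, value);
        return 0;
    }

    int
    main(void)
    {
        static const char *const valid[] = { "iface", "queues", NULL };

        /* Parse a devargs-style string, the way --vdev options are consumed;
         * the keys here are made up for the demo. */
        struct rte_kvargs *kv = rte_kvargs_parse("iface=eth0,queues=4", valid);
        if (kv == NULL)
            return 1;

        rte_kvargs_process(kv, "iface", print_kv, NULL);
        printf("queues given %u time(s)\n", rte_kvargs_count(kv, "queues"));
        rte_kvargs_free(kv);
        return 0;
    }

rte_kvargs_parse() is the same machinery drivers use to digest devargs strings, which is consistent with kvargs installing this early in the dependency chain above.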
00:02:41.048 Installing lib/librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.048 Installing lib/librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_bpf.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.049 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_member.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pdump.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.314 Installing lib/librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing lib/librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing drivers/librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:41.315 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing drivers/librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:41.315 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing drivers/librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:41.315 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.315 Installing drivers/librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:02:41.315 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
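Note where the driver objects go: the static archives join the other libraries in build/lib, but the shared PMDs (bus_pci, bus_vdev, mempool_ring, net_i40e) are installed into the plugin directory build/lib/dpdk/pmds-25.0, and the dpdk-* tools land in build/bin. A shared-build application would typically point the EAL at that plugin directory; a hypothetical launcher sketch follows (the --no-huge flag is an assumption, just to keep the smoke test hugepage-free):

    #include <rte_eal.h>
    #include <rte_debug.h>

    /* Hypothetical launcher: "-d <dir>" asks the EAL to load every driver
     * shared object found in the directory. The path below is the PMD
     * plugin directory populated by this build. */
    int
    main(void)
    {
        char *eal_args[] = {
            "app",
            "-d", "/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0",
            "--no-huge",
        };
        int eal_argc = 4;

        if (rte_eal_init(eal_argc, eal_args) < 0)
            rte_panic("EAL init failed\n");
        rte_eal_cleanup();
        return 0;
    }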
00:02:41.315 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.315 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.316 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.317 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
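The install records above and below stage every public rte_*.h header from the DPDK source tree into the build prefix at dpdk/build/include. A quick sanity check that headers resolve from that non-default prefix could look like the following shell sketch; it is illustrative only and not a command executed by this job:

# Hypothetical spot-check (not part of this job): preprocess a trivial unit
# against the staged headers to confirm they resolve from the build prefix.
DPDK_INC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
echo '#include <rte_version.h>' | cc -I "$DPDK_INC" -E -x c - >/dev/null && echo 'headers resolve OK'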
00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.318 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.319 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:41.319 Installing symlink pointing to librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.25 00:02:41.319 Installing symlink pointing to librte_log.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:41.319 Installing symlink pointing to librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.25 00:02:41.319 Installing symlink pointing to librte_kvargs.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:41.319 Installing symlink pointing to librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.25 00:02:41.319 Installing symlink pointing to librte_argparse.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:41.319 Installing symlink pointing to librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.25 00:02:41.319 Installing symlink pointing to librte_telemetry.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:41.319 Installing symlink pointing to librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.25 00:02:41.319 Installing symlink pointing to librte_eal.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:41.319 Installing symlink pointing to librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.25 00:02:41.319 Installing symlink pointing to librte_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:41.319 Installing symlink pointing to librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.25 00:02:41.319 Installing symlink pointing to librte_rcu.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:41.319 Installing symlink pointing to librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.25 00:02:41.319 Installing symlink pointing to librte_mempool.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:41.319 Installing symlink pointing to librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.25 00:02:41.319 Installing symlink pointing to librte_mbuf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:41.319 Installing symlink pointing to librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.25 00:02:41.319 Installing symlink pointing to librte_net.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:41.319 Installing symlink pointing to librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.25 00:02:41.319 Installing symlink pointing to librte_meter.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:41.319 Installing symlink pointing to librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.25 00:02:41.319 Installing symlink pointing to librte_ethdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:41.319 Installing symlink pointing to librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.25 00:02:41.319 Installing symlink pointing to librte_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:41.319 Installing symlink pointing to librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.25 00:02:41.319 Installing symlink pointing to librte_cmdline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:41.319 Installing symlink pointing to librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.25 00:02:41.319 Installing symlink pointing to librte_metrics.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:41.319 Installing symlink pointing to librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.25 00:02:41.319 Installing symlink pointing to librte_hash.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:41.319 Installing symlink pointing to librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.25 00:02:41.319 Installing symlink pointing to librte_timer.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:41.319 
Installing symlink pointing to librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.25 00:02:41.319 Installing symlink pointing to librte_acl.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:41.319 Installing symlink pointing to librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.25 00:02:41.319 Installing symlink pointing to librte_bbdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:41.319 Installing symlink pointing to librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.25 00:02:41.319 Installing symlink pointing to librte_bitratestats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:41.319 Installing symlink pointing to librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.25 00:02:41.319 Installing symlink pointing to librte_bpf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:41.319 Installing symlink pointing to librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.25 00:02:41.319 Installing symlink pointing to librte_cfgfile.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:41.319 Installing symlink pointing to librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.25 00:02:41.319 Installing symlink pointing to librte_compressdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:41.319 Installing symlink pointing to librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.25 00:02:41.319 Installing symlink pointing to librte_cryptodev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:41.319 Installing symlink pointing to librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.25 00:02:41.319 Installing symlink pointing to librte_distributor.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:41.319 Installing symlink pointing to librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.25 00:02:41.319 Installing symlink pointing to librte_dmadev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:41.320 Installing symlink pointing to librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.25 00:02:41.320 Installing symlink pointing to librte_efd.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:41.320 Installing symlink pointing to librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.25 00:02:41.320 Installing symlink pointing to librte_eventdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:41.320 Installing symlink pointing to librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.25 00:02:41.320 Installing symlink pointing to librte_dispatcher.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:41.320 Installing symlink pointing to librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.25 00:02:41.320 Installing symlink pointing to librte_gpudev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:41.320 Installing symlink pointing to librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.25 00:02:41.320 Installing symlink pointing to librte_gro.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:41.320 Installing symlink pointing to librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.25 00:02:41.320 Installing symlink pointing to librte_gso.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:41.320 Installing symlink pointing to librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.25 00:02:41.320 Installing symlink pointing to librte_ip_frag.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:41.320 Installing symlink pointing to librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.25 00:02:41.320 Installing symlink pointing to librte_jobstats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:41.320 Installing symlink pointing to librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.25 00:02:41.320 Installing symlink pointing to librte_latencystats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:41.320 Installing symlink pointing to librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.25 00:02:41.320 Installing symlink pointing to librte_lpm.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:41.320 Installing symlink pointing to librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.25 00:02:41.320 Installing symlink pointing to librte_member.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:41.320 Installing symlink pointing to librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.25 00:02:41.320 Installing symlink pointing to librte_pcapng.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:41.320 Installing symlink pointing to librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.25 00:02:41.320 Installing symlink pointing to librte_power.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:41.320 Installing symlink pointing to librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.25 00:02:41.320 Installing symlink pointing to librte_rawdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:41.320 Installing symlink pointing to librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.25 00:02:41.320 Installing symlink pointing to librte_regexdev.so.25 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:41.320 Installing symlink pointing to librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.25 00:02:41.320 Installing symlink pointing to librte_mldev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:41.320 Installing symlink pointing to librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.25 00:02:41.320 Installing symlink pointing to librte_rib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:41.320 Installing symlink pointing to librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.25 00:02:41.320 Installing symlink pointing to librte_reorder.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:41.320 Installing symlink pointing to librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.25 00:02:41.320 Installing symlink pointing to librte_sched.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:41.320 Installing symlink pointing to librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.25 00:02:41.320 Installing symlink pointing to librte_security.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:41.320 Installing symlink pointing to librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.25 00:02:41.320 Installing symlink pointing to librte_stack.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:41.320 Installing symlink pointing to librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.25 00:02:41.320 Installing symlink pointing to librte_vhost.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:41.320 Installing symlink pointing to librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.25 00:02:41.320 Installing symlink pointing to librte_ipsec.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:41.320 Installing symlink pointing to librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.25 00:02:41.320 Installing symlink pointing to librte_pdcp.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:41.320 Installing symlink pointing to librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.25 00:02:41.320 Installing symlink pointing to librte_fib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:41.320 Installing symlink pointing to librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.25 00:02:41.320 Installing symlink pointing to librte_port.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:41.320 Installing symlink pointing to librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.25 00:02:41.320 Installing symlink pointing to librte_pdump.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
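The symlink records above and below follow the standard three-level ELF shared-library naming scheme: the versioned file librte_X.so.25.0 is the real DSO, librte_X.so.25 is the SONAME link the dynamic loader resolves at run time, and the unversioned librte_X.so is the development link the link editor finds via -lrte_X. Sketched for one representative library (the same pattern repeats for every librte_* entry installed here); the listing below is illustrative, not output from this job:

# Illustration of the resulting layout, using librte_eal as an example:
ls -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so*
# librte_eal.so      -> librte_eal.so.25    (dev link, used at link time via -lrte_eal)
# librte_eal.so.25   -> librte_eal.so.25.0  (SONAME link, resolved by the runtime loader)
# librte_eal.so.25.0                        (the versioned DSO itself)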
00:02:41.320 Installing symlink pointing to librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.25 00:02:41.320 Installing symlink pointing to librte_table.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:41.320 Installing symlink pointing to librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.25 00:02:41.320 Installing symlink pointing to librte_pipeline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:41.320 Installing symlink pointing to librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.25 00:02:41.320 Installing symlink pointing to librte_graph.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:41.320 Installing symlink pointing to librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.25 00:02:41.320 Installing symlink pointing to librte_node.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:41.320 Installing symlink pointing to librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:41.320 Installing symlink pointing to librte_bus_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:02:41.320 Installing symlink pointing to librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:41.320 Installing symlink pointing to librte_bus_vdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:41.320 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:41.320 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:41.320 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:41.320 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:41.320 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:02:41.320 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:41.320 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:41.320 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:41.320 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:41.320 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:41.320 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:41.320 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:41.320 Installing symlink pointing to librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:41.320 Installing symlink pointing to librte_mempool_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:41.320 Installing symlink pointing to librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:41.320 Installing symlink pointing to librte_net_i40e.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:41.320 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:41.320 13:58:44 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:41.320 13:58:44 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.320 00:02:41.320 real 0m25.087s 00:02:41.320 user 7m37.316s 00:02:41.320 sys 3m46.087s 00:02:41.320 13:58:44 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:41.320 13:58:44 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:41.320 ************************************ 00:02:41.320 END TEST build_native_dpdk 00:02:41.320 ************************************ 00:02:41.582 13:58:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:41.582 13:58:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:41.582 13:58:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:41.582 13:58:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:41.582 13:58:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:41.582 13:58:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:41.582 13:58:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:41.582 13:58:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:41.582 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:41.844 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:41.844 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:41.844 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:42.105 Using 'verbs' RDMA provider 00:02:57.963 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:10.388 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:10.960 Creating mk/config.mk...done. 00:03:10.960 Creating mk/cc.flags.mk...done. 00:03:10.960 Type 'make' to build. 00:03:10.960 13:59:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:03:10.960 13:59:14 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:10.960 13:59:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:10.960 13:59:14 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.960 ************************************ 00:03:10.960 START TEST make 00:03:10.960 ************************************ 00:03:10.960 13:59:14 make -- common/autotest_common.sh@1125 -- $ make -j144 00:03:11.533 make[1]: Nothing to be done for 'all'.
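The configure step above locates DPDK through the libdpdk.pc file installed earlier into dpdk/build/lib/pkgconfig (hence the "Using ... for additional libs" line). Querying the same metadata by hand would look roughly like this sketch, assuming the 25.0 version string seen in the symlink records above:

# Hypothetical manual query of the pkg-config data the configure step consumed:
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk      # expected to report 25.0 for this build
pkg-config --cflags --libs libdpdk   # the flags an external application would build with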
00:03:12.916 The Meson build system
00:03:12.916 Version: 1.5.0
00:03:12.916 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:12.916 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:12.916 Build type: native build
00:03:12.916 Project name: libvfio-user
00:03:12.916 Project version: 0.0.1
00:03:12.916 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:12.916 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:12.916 Host machine cpu family: x86_64
00:03:12.916 Host machine cpu: x86_64
00:03:12.916 Run-time dependency threads found: YES
00:03:12.916 Library dl found: YES
00:03:12.916 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:12.916 Run-time dependency json-c found: YES 0.17
00:03:12.916 Run-time dependency cmocka found: YES 1.1.7
00:03:12.916 Program pytest-3 found: NO
00:03:12.916 Program flake8 found: NO
00:03:12.916 Program misspell-fixer found: NO
00:03:12.916 Program restructuredtext-lint found: NO
00:03:12.916 Program valgrind found: YES (/usr/bin/valgrind)
00:03:12.916 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:12.916 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:12.916 Compiler for C supports arguments -Wwrite-strings: YES
00:03:12.916 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:12.916 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:12.916 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:12.916 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:12.916 Build targets in project: 8
00:03:12.916 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:12.916 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:12.916
00:03:12.916 libvfio-user 0.0.1
00:03:12.916
00:03:12.916 User defined options
00:03:12.916 buildtype : debug
00:03:12.916 default_library: shared
00:03:12.916 libdir : /usr/local/lib
00:03:12.916
00:03:12.916 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:13.176 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:13.436 [1/37] Compiling C object samples/null.p/null.c.o
00:03:13.436 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:13.436 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:13.436 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:13.436 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:13.436 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:13.436 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:13.436 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:13.436 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:13.436 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:13.436 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:13.436 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:13.436 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:13.436 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:13.436 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:13.436 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:13.436 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:13.436 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:13.436 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:13.436 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:13.436 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:13.436 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:13.436 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:13.436 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:13.436 [25/37] Compiling C object samples/server.p/server.c.o
00:03:13.436 [26/37] Compiling C object samples/client.p/client.c.o
00:03:13.436 [27/37] Linking target samples/client
00:03:13.698 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:13.698 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:13.698 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:13.698 [31/37] Linking target test/unit_tests
00:03:13.698 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:13.698 [33/37] Linking target samples/server
00:03:13.698 [34/37] Linking target samples/null
00:03:13.698 [35/37] Linking target samples/lspci
00:03:13.698 [36/37] Linking target samples/shadow_ioeventfd_server
00:03:13.698 [37/37] Linking target samples/gpio-pci-idio-16
00:03:13.698 INFO: autodetecting backend as ninja
00:03:13.698 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:13.959 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.219 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.219 ninja: no work to do. 00:03:36.193 CC lib/log/log.o 00:03:36.193 CC lib/log/log_flags.o 00:03:36.193 CC lib/log/log_deprecated.o 00:03:36.193 CC lib/ut/ut.o 00:03:36.193 CC lib/ut_mock/mock.o 00:03:36.455 LIB libspdk_ut_mock.a 00:03:36.455 SO libspdk_ut_mock.so.6.0 00:03:36.455 LIB libspdk_ut.a 00:03:36.455 LIB libspdk_log.a 00:03:36.455 SO libspdk_ut.so.2.0 00:03:36.455 SO libspdk_log.so.7.1 00:03:36.455 SYMLINK libspdk_ut_mock.so 00:03:36.455 SYMLINK libspdk_ut.so 00:03:36.455 SYMLINK libspdk_log.so 00:03:37.028 CC lib/util/base64.o 00:03:37.028 CC lib/util/bit_array.o 00:03:37.028 CC lib/util/cpuset.o 00:03:37.028 CC lib/ioat/ioat.o 00:03:37.028 CC lib/util/crc16.o 00:03:37.028 CC lib/util/crc32.o 00:03:37.028 CC lib/util/crc32c.o 00:03:37.028 CC lib/util/crc32_ieee.o 00:03:37.028 CC lib/dma/dma.o 00:03:37.028 CC lib/util/crc64.o 00:03:37.028 CC lib/util/dif.o 00:03:37.028 CC lib/util/fd.o 00:03:37.028 CXX lib/trace_parser/trace.o 00:03:37.028 CC lib/util/fd_group.o 00:03:37.028 CC lib/util/file.o 00:03:37.028 CC lib/util/hexlify.o 00:03:37.028 CC lib/util/iov.o 00:03:37.028 CC lib/util/math.o 00:03:37.028 CC lib/util/net.o 00:03:37.028 CC lib/util/pipe.o 00:03:37.028 CC lib/util/strerror_tls.o 00:03:37.028 CC lib/util/string.o 00:03:37.028 CC lib/util/uuid.o 00:03:37.028 CC lib/util/xor.o 00:03:37.028 CC lib/util/zipf.o 00:03:37.028 CC lib/util/md5.o 00:03:37.028 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.028 CC lib/vfio_user/host/vfio_user.o 00:03:37.290 LIB libspdk_dma.a 00:03:37.290 SO libspdk_dma.so.5.0 00:03:37.290 LIB libspdk_ioat.a 00:03:37.290 SYMLINK libspdk_dma.so 00:03:37.290 SO libspdk_ioat.so.7.0 00:03:37.290 SYMLINK libspdk_ioat.so 00:03:37.290 LIB libspdk_vfio_user.a 00:03:37.290 SO libspdk_vfio_user.so.5.0 00:03:37.551 LIB libspdk_util.a 00:03:37.552 SYMLINK libspdk_vfio_user.so 00:03:37.552 SO libspdk_util.so.10.0 00:03:37.552 SYMLINK libspdk_util.so 00:03:37.813 LIB libspdk_trace_parser.a 00:03:37.813 SO libspdk_trace_parser.so.6.0 00:03:37.813 SYMLINK libspdk_trace_parser.so 00:03:38.074 CC lib/rdma_provider/common.o 00:03:38.074 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:38.074 CC lib/conf/conf.o 00:03:38.074 CC lib/vmd/vmd.o 00:03:38.074 CC lib/env_dpdk/env.o 00:03:38.074 CC lib/vmd/led.o 00:03:38.074 CC lib/env_dpdk/memory.o 00:03:38.074 CC lib/env_dpdk/pci.o 00:03:38.074 CC lib/idxd/idxd.o 00:03:38.074 CC lib/env_dpdk/init.o 00:03:38.074 CC lib/idxd/idxd_user.o 00:03:38.074 CC lib/env_dpdk/threads.o 00:03:38.074 CC lib/env_dpdk/pci_ioat.o 00:03:38.074 CC lib/idxd/idxd_kernel.o 00:03:38.074 CC lib/rdma_utils/rdma_utils.o 00:03:38.074 CC lib/json/json_parse.o 00:03:38.074 CC lib/env_dpdk/pci_virtio.o 00:03:38.074 CC lib/json/json_util.o 00:03:38.074 CC lib/env_dpdk/pci_vmd.o 00:03:38.074 CC lib/json/json_write.o 00:03:38.074 CC lib/env_dpdk/pci_idxd.o 00:03:38.074 CC lib/env_dpdk/pci_event.o 00:03:38.074 CC lib/env_dpdk/sigbus_handler.o 00:03:38.074 CC lib/env_dpdk/pci_dpdk.o 00:03:38.074 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:38.074 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:38.336 LIB libspdk_rdma_provider.a 00:03:38.336 LIB libspdk_conf.a 00:03:38.336 LIB libspdk_rdma_utils.a 00:03:38.336 SO libspdk_rdma_provider.so.6.0 00:03:38.336 
SO libspdk_conf.so.6.0 00:03:38.336 SO libspdk_rdma_utils.so.1.0 00:03:38.336 LIB libspdk_json.a 00:03:38.336 SYMLINK libspdk_rdma_provider.so 00:03:38.336 SYMLINK libspdk_conf.so 00:03:38.336 SO libspdk_json.so.6.0 00:03:38.336 SYMLINK libspdk_rdma_utils.so 00:03:38.336 SYMLINK libspdk_json.so 00:03:38.611 LIB libspdk_idxd.a 00:03:38.611 LIB libspdk_vmd.a 00:03:38.611 SO libspdk_idxd.so.12.1 00:03:38.611 SO libspdk_vmd.so.6.0 00:03:38.611 SYMLINK libspdk_idxd.so 00:03:38.611 SYMLINK libspdk_vmd.so 00:03:38.882 CC lib/jsonrpc/jsonrpc_server.o 00:03:38.882 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:38.882 CC lib/jsonrpc/jsonrpc_client.o 00:03:38.882 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:39.143 LIB libspdk_jsonrpc.a 00:03:39.143 SO libspdk_jsonrpc.so.6.0 00:03:39.143 SYMLINK libspdk_jsonrpc.so 00:03:39.405 LIB libspdk_env_dpdk.a 00:03:39.405 SO libspdk_env_dpdk.so.15.0 00:03:39.405 SYMLINK libspdk_env_dpdk.so 00:03:39.405 CC lib/rpc/rpc.o 00:03:39.667 LIB libspdk_rpc.a 00:03:39.667 SO libspdk_rpc.so.6.0 00:03:39.928 SYMLINK libspdk_rpc.so 00:03:40.189 CC lib/trace/trace.o 00:03:40.189 CC lib/trace/trace_flags.o 00:03:40.189 CC lib/notify/notify.o 00:03:40.189 CC lib/trace/trace_rpc.o 00:03:40.189 CC lib/notify/notify_rpc.o 00:03:40.189 CC lib/keyring/keyring.o 00:03:40.189 CC lib/keyring/keyring_rpc.o 00:03:40.450 LIB libspdk_notify.a 00:03:40.450 SO libspdk_notify.so.6.0 00:03:40.450 LIB libspdk_trace.a 00:03:40.450 LIB libspdk_keyring.a 00:03:40.450 SO libspdk_trace.so.11.0 00:03:40.450 SO libspdk_keyring.so.2.0 00:03:40.450 SYMLINK libspdk_notify.so 00:03:40.450 SYMLINK libspdk_keyring.so 00:03:40.450 SYMLINK libspdk_trace.so 00:03:41.021 CC lib/sock/sock.o 00:03:41.021 CC lib/sock/sock_rpc.o 00:03:41.021 CC lib/thread/thread.o 00:03:41.021 CC lib/thread/iobuf.o 00:03:41.281 LIB libspdk_sock.a 00:03:41.281 SO libspdk_sock.so.10.0 00:03:41.542 SYMLINK libspdk_sock.so 00:03:41.804 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.804 CC lib/nvme/nvme_ctrlr.o 00:03:41.804 CC lib/nvme/nvme_fabric.o 00:03:41.804 CC lib/nvme/nvme_ns_cmd.o 00:03:41.804 CC lib/nvme/nvme_ns.o 00:03:41.804 CC lib/nvme/nvme_pcie_common.o 00:03:41.804 CC lib/nvme/nvme_pcie.o 00:03:41.804 CC lib/nvme/nvme_qpair.o 00:03:41.804 CC lib/nvme/nvme.o 00:03:41.804 CC lib/nvme/nvme_quirks.o 00:03:41.804 CC lib/nvme/nvme_transport.o 00:03:41.804 CC lib/nvme/nvme_discovery.o 00:03:41.804 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:41.804 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:41.804 CC lib/nvme/nvme_tcp.o 00:03:41.804 CC lib/nvme/nvme_opal.o 00:03:41.804 CC lib/nvme/nvme_io_msg.o 00:03:41.804 CC lib/nvme/nvme_poll_group.o 00:03:41.804 CC lib/nvme/nvme_zns.o 00:03:41.804 CC lib/nvme/nvme_stubs.o 00:03:41.804 CC lib/nvme/nvme_auth.o 00:03:41.804 CC lib/nvme/nvme_cuse.o 00:03:41.804 CC lib/nvme/nvme_vfio_user.o 00:03:41.804 CC lib/nvme/nvme_rdma.o 00:03:42.376 LIB libspdk_thread.a 00:03:42.376 SO libspdk_thread.so.10.2 00:03:42.376 SYMLINK libspdk_thread.so 00:03:42.639 CC lib/vfu_tgt/tgt_endpoint.o 00:03:42.639 CC lib/vfu_tgt/tgt_rpc.o 00:03:42.639 CC lib/blob/blobstore.o 00:03:42.639 CC lib/init/json_config.o 00:03:42.639 CC lib/blob/request.o 00:03:42.639 CC lib/blob/zeroes.o 00:03:42.639 CC lib/accel/accel.o 00:03:42.639 CC lib/blob/blob_bs_dev.o 00:03:42.639 CC lib/accel/accel_rpc.o 00:03:42.639 CC lib/init/subsystem.o 00:03:42.639 CC lib/accel/accel_sw.o 00:03:42.639 CC lib/init/subsystem_rpc.o 00:03:42.639 CC lib/init/rpc.o 00:03:42.639 CC lib/fsdev/fsdev.o 00:03:42.639 CC lib/virtio/virtio.o 00:03:42.639 CC 
lib/virtio/virtio_vhost_user.o 00:03:42.639 CC lib/fsdev/fsdev_io.o 00:03:42.639 CC lib/virtio/virtio_vfio_user.o 00:03:42.639 CC lib/fsdev/fsdev_rpc.o 00:03:42.639 CC lib/virtio/virtio_pci.o 00:03:42.901 LIB libspdk_init.a 00:03:43.161 SO libspdk_init.so.6.0 00:03:43.161 LIB libspdk_vfu_tgt.a 00:03:43.161 SO libspdk_vfu_tgt.so.3.0 00:03:43.161 LIB libspdk_virtio.a 00:03:43.161 SYMLINK libspdk_init.so 00:03:43.161 SO libspdk_virtio.so.7.0 00:03:43.161 SYMLINK libspdk_vfu_tgt.so 00:03:43.161 SYMLINK libspdk_virtio.so 00:03:43.422 LIB libspdk_fsdev.a 00:03:43.422 SO libspdk_fsdev.so.1.0 00:03:43.422 CC lib/event/app.o 00:03:43.422 CC lib/event/reactor.o 00:03:43.422 CC lib/event/log_rpc.o 00:03:43.422 CC lib/event/app_rpc.o 00:03:43.422 CC lib/event/scheduler_static.o 00:03:43.422 SYMLINK libspdk_fsdev.so 00:03:43.684 LIB libspdk_accel.a 00:03:43.684 LIB libspdk_nvme.a 00:03:43.684 SO libspdk_accel.so.16.0 00:03:43.946 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:43.946 SYMLINK libspdk_accel.so 00:03:43.946 SO libspdk_nvme.so.14.0 00:03:43.946 LIB libspdk_event.a 00:03:43.946 SO libspdk_event.so.15.0 00:03:43.946 SYMLINK libspdk_event.so 00:03:44.208 SYMLINK libspdk_nvme.so 00:03:44.208 CC lib/bdev/bdev.o 00:03:44.208 CC lib/bdev/bdev_rpc.o 00:03:44.208 CC lib/bdev/bdev_zone.o 00:03:44.208 CC lib/bdev/part.o 00:03:44.208 CC lib/bdev/scsi_nvme.o 00:03:44.468 LIB libspdk_fuse_dispatcher.a 00:03:44.468 SO libspdk_fuse_dispatcher.so.1.0 00:03:44.468 SYMLINK libspdk_fuse_dispatcher.so 00:03:45.411 LIB libspdk_blob.a 00:03:45.411 SO libspdk_blob.so.11.0 00:03:45.411 SYMLINK libspdk_blob.so 00:03:45.985 CC lib/blobfs/blobfs.o 00:03:45.985 CC lib/blobfs/tree.o 00:03:45.985 CC lib/lvol/lvol.o 00:03:46.558 LIB libspdk_bdev.a 00:03:46.558 SO libspdk_bdev.so.17.0 00:03:46.558 SYMLINK libspdk_bdev.so 00:03:46.558 LIB libspdk_lvol.a 00:03:46.818 LIB libspdk_blobfs.a 00:03:46.818 SO libspdk_lvol.so.10.0 00:03:46.818 SO libspdk_blobfs.so.10.0 00:03:46.818 SYMLINK libspdk_lvol.so 00:03:46.818 SYMLINK libspdk_blobfs.so 00:03:47.082 CC lib/nvmf/ctrlr.o 00:03:47.082 CC lib/nvmf/ctrlr_discovery.o 00:03:47.082 CC lib/nvmf/ctrlr_bdev.o 00:03:47.082 CC lib/ublk/ublk.o 00:03:47.082 CC lib/nvmf/subsystem.o 00:03:47.082 CC lib/ublk/ublk_rpc.o 00:03:47.082 CC lib/nvmf/nvmf.o 00:03:47.082 CC lib/ftl/ftl_core.o 00:03:47.082 CC lib/nvmf/nvmf_rpc.o 00:03:47.083 CC lib/ftl/ftl_init.o 00:03:47.083 CC lib/scsi/dev.o 00:03:47.083 CC lib/nbd/nbd.o 00:03:47.083 CC lib/ftl/ftl_layout.o 00:03:47.083 CC lib/nvmf/transport.o 00:03:47.083 CC lib/scsi/lun.o 00:03:47.083 CC lib/ftl/ftl_debug.o 00:03:47.083 CC lib/ftl/ftl_io.o 00:03:47.083 CC lib/nvmf/stubs.o 00:03:47.083 CC lib/scsi/port.o 00:03:47.083 CC lib/scsi/scsi.o 00:03:47.083 CC lib/nbd/nbd_rpc.o 00:03:47.083 CC lib/nvmf/tcp.o 00:03:47.083 CC lib/ftl/ftl_sb.o 00:03:47.083 CC lib/nvmf/mdns_server.o 00:03:47.083 CC lib/ftl/ftl_l2p.o 00:03:47.083 CC lib/scsi/scsi_bdev.o 00:03:47.083 CC lib/ftl/ftl_l2p_flat.o 00:03:47.083 CC lib/nvmf/vfio_user.o 00:03:47.083 CC lib/ftl/ftl_nv_cache.o 00:03:47.083 CC lib/scsi/scsi_pr.o 00:03:47.083 CC lib/nvmf/rdma.o 00:03:47.083 CC lib/scsi/scsi_rpc.o 00:03:47.083 CC lib/nvmf/auth.o 00:03:47.083 CC lib/ftl/ftl_band.o 00:03:47.083 CC lib/ftl/ftl_band_ops.o 00:03:47.083 CC lib/scsi/task.o 00:03:47.083 CC lib/ftl/ftl_writer.o 00:03:47.083 CC lib/ftl/ftl_rq.o 00:03:47.083 CC lib/ftl/ftl_reloc.o 00:03:47.083 CC lib/ftl/ftl_l2p_cache.o 00:03:47.083 CC lib/ftl/ftl_p2l.o 00:03:47.083 CC lib/ftl/ftl_p2l_log.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt.o 
00:03:47.083 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:47.083 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:47.083 CC lib/ftl/utils/ftl_conf.o 00:03:47.083 CC lib/ftl/utils/ftl_mempool.o 00:03:47.083 CC lib/ftl/utils/ftl_md.o 00:03:47.083 CC lib/ftl/utils/ftl_bitmap.o 00:03:47.083 CC lib/ftl/utils/ftl_property.o 00:03:47.083 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:47.083 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:47.083 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:47.083 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:47.083 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:47.083 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:47.083 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:47.083 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:47.083 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:47.083 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:47.083 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.083 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:47.083 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:47.083 CC lib/ftl/base/ftl_base_dev.o 00:03:47.083 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.083 CC lib/ftl/ftl_trace.o 00:03:47.659 LIB libspdk_nbd.a 00:03:47.659 LIB libspdk_scsi.a 00:03:47.920 SO libspdk_nbd.so.7.0 00:03:47.920 SO libspdk_scsi.so.9.0 00:03:47.920 SYMLINK libspdk_nbd.so 00:03:47.920 LIB libspdk_ublk.a 00:03:47.920 SYMLINK libspdk_scsi.so 00:03:47.920 SO libspdk_ublk.so.3.0 00:03:48.182 SYMLINK libspdk_ublk.so 00:03:48.182 LIB libspdk_ftl.a 00:03:48.182 CC lib/vhost/vhost.o 00:03:48.182 CC lib/vhost/vhost_rpc.o 00:03:48.182 CC lib/vhost/vhost_scsi.o 00:03:48.182 CC lib/vhost/vhost_blk.o 00:03:48.182 CC lib/vhost/rte_vhost_user.o 00:03:48.182 CC lib/iscsi/conn.o 00:03:48.182 CC lib/iscsi/init_grp.o 00:03:48.182 CC lib/iscsi/iscsi.o 00:03:48.182 CC lib/iscsi/param.o 00:03:48.182 CC lib/iscsi/portal_grp.o 00:03:48.182 CC lib/iscsi/tgt_node.o 00:03:48.182 CC lib/iscsi/iscsi_subsystem.o 00:03:48.182 CC lib/iscsi/iscsi_rpc.o 00:03:48.182 CC lib/iscsi/task.o 00:03:48.443 SO libspdk_ftl.so.9.0 00:03:48.705 SYMLINK libspdk_ftl.so 00:03:49.278 LIB libspdk_nvmf.a 00:03:49.278 SO libspdk_nvmf.so.19.0 00:03:49.278 LIB libspdk_vhost.a 00:03:49.278 SO libspdk_vhost.so.8.0 00:03:49.539 SYMLINK libspdk_vhost.so 00:03:49.539 SYMLINK libspdk_nvmf.so 00:03:49.539 LIB libspdk_iscsi.a 00:03:49.539 SO libspdk_iscsi.so.8.0 00:03:49.801 SYMLINK libspdk_iscsi.so 00:03:50.374 CC module/env_dpdk/env_dpdk_rpc.o 00:03:50.374 CC module/vfu_device/vfu_virtio.o 00:03:50.374 CC module/vfu_device/vfu_virtio_blk.o 00:03:50.374 CC module/vfu_device/vfu_virtio_scsi.o 00:03:50.374 CC module/vfu_device/vfu_virtio_rpc.o 00:03:50.374 CC module/vfu_device/vfu_virtio_fs.o 00:03:50.374 LIB libspdk_env_dpdk_rpc.a 00:03:50.374 CC module/sock/posix/posix.o 00:03:50.374 CC module/blob/bdev/blob_bdev.o 00:03:50.636 CC module/accel/dsa/accel_dsa.o 00:03:50.636 CC module/accel/dsa/accel_dsa_rpc.o 00:03:50.636 CC module/accel/error/accel_error.o 00:03:50.636 CC module/accel/error/accel_error_rpc.o 00:03:50.636 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:50.636 CC module/accel/ioat/accel_ioat.o 00:03:50.636 CC 
module/keyring/linux/keyring.o 00:03:50.636 CC module/accel/ioat/accel_ioat_rpc.o 00:03:50.636 CC module/keyring/linux/keyring_rpc.o 00:03:50.636 CC module/fsdev/aio/fsdev_aio.o 00:03:50.636 CC module/accel/iaa/accel_iaa.o 00:03:50.636 CC module/scheduler/gscheduler/gscheduler.o 00:03:50.636 CC module/accel/iaa/accel_iaa_rpc.o 00:03:50.636 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:50.636 CC module/keyring/file/keyring.o 00:03:50.636 CC module/fsdev/aio/linux_aio_mgr.o 00:03:50.636 CC module/keyring/file/keyring_rpc.o 00:03:50.636 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:50.636 SO libspdk_env_dpdk_rpc.so.6.0 00:03:50.636 SYMLINK libspdk_env_dpdk_rpc.so 00:03:50.636 LIB libspdk_keyring_file.a 00:03:50.636 LIB libspdk_keyring_linux.a 00:03:50.636 LIB libspdk_scheduler_dpdk_governor.a 00:03:50.636 LIB libspdk_scheduler_gscheduler.a 00:03:50.636 LIB libspdk_accel_error.a 00:03:50.898 SO libspdk_keyring_file.so.2.0 00:03:50.898 LIB libspdk_accel_ioat.a 00:03:50.898 SO libspdk_keyring_linux.so.1.0 00:03:50.898 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:50.898 LIB libspdk_scheduler_dynamic.a 00:03:50.898 LIB libspdk_accel_iaa.a 00:03:50.898 SO libspdk_scheduler_gscheduler.so.4.0 00:03:50.898 SO libspdk_accel_error.so.2.0 00:03:50.898 LIB libspdk_blob_bdev.a 00:03:50.898 SO libspdk_scheduler_dynamic.so.4.0 00:03:50.898 SO libspdk_accel_ioat.so.6.0 00:03:50.898 LIB libspdk_accel_dsa.a 00:03:50.898 SO libspdk_accel_iaa.so.3.0 00:03:50.898 SYMLINK libspdk_keyring_file.so 00:03:50.898 SYMLINK libspdk_scheduler_gscheduler.so 00:03:50.898 SYMLINK libspdk_keyring_linux.so 00:03:50.898 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:50.898 SO libspdk_blob_bdev.so.11.0 00:03:50.898 SYMLINK libspdk_accel_error.so 00:03:50.898 SO libspdk_accel_dsa.so.5.0 00:03:50.898 SYMLINK libspdk_accel_iaa.so 00:03:50.898 SYMLINK libspdk_scheduler_dynamic.so 00:03:50.898 SYMLINK libspdk_accel_ioat.so 00:03:50.898 SYMLINK libspdk_blob_bdev.so 00:03:50.898 SYMLINK libspdk_accel_dsa.so 00:03:50.898 LIB libspdk_vfu_device.a 00:03:50.898 SO libspdk_vfu_device.so.3.0 00:03:51.160 SYMLINK libspdk_vfu_device.so 00:03:51.160 LIB libspdk_fsdev_aio.a 00:03:51.160 SO libspdk_fsdev_aio.so.1.0 00:03:51.160 LIB libspdk_sock_posix.a 00:03:51.160 SO libspdk_sock_posix.so.6.0 00:03:51.420 SYMLINK libspdk_fsdev_aio.so 00:03:51.420 SYMLINK libspdk_sock_posix.so 00:03:51.420 CC module/bdev/lvol/vbdev_lvol.o 00:03:51.420 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:51.420 CC module/bdev/error/vbdev_error.o 00:03:51.420 CC module/bdev/error/vbdev_error_rpc.o 00:03:51.420 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:51.420 CC module/bdev/delay/vbdev_delay.o 00:03:51.420 CC module/bdev/gpt/gpt.o 00:03:51.420 CC module/bdev/passthru/vbdev_passthru.o 00:03:51.420 CC module/bdev/gpt/vbdev_gpt.o 00:03:51.420 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:51.420 CC module/blobfs/bdev/blobfs_bdev.o 00:03:51.420 CC module/bdev/null/bdev_null.o 00:03:51.420 CC module/bdev/null/bdev_null_rpc.o 00:03:51.420 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:51.420 CC module/bdev/malloc/bdev_malloc.o 00:03:51.420 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:51.420 CC module/bdev/raid/bdev_raid.o 00:03:51.420 CC module/bdev/raid/bdev_raid_rpc.o 00:03:51.420 CC module/bdev/raid/bdev_raid_sb.o 00:03:51.420 CC module/bdev/raid/raid0.o 00:03:51.420 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:51.420 CC module/bdev/aio/bdev_aio.o 00:03:51.420 CC module/bdev/nvme/bdev_nvme.o 00:03:51.420 CC module/bdev/raid/raid1.o 00:03:51.420 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:03:51.420 CC module/bdev/split/vbdev_split.o 00:03:51.420 CC module/bdev/aio/bdev_aio_rpc.o 00:03:51.420 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:51.421 CC module/bdev/ftl/bdev_ftl.o 00:03:51.421 CC module/bdev/raid/concat.o 00:03:51.421 CC module/bdev/nvme/nvme_rpc.o 00:03:51.421 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:51.421 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:51.421 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:51.421 CC module/bdev/split/vbdev_split_rpc.o 00:03:51.421 CC module/bdev/nvme/bdev_mdns_client.o 00:03:51.421 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:51.421 CC module/bdev/nvme/vbdev_opal.o 00:03:51.421 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:51.421 CC module/bdev/iscsi/bdev_iscsi.o 00:03:51.421 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:51.421 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:51.681 LIB libspdk_blobfs_bdev.a 00:03:51.943 SO libspdk_blobfs_bdev.so.6.0 00:03:51.943 LIB libspdk_bdev_split.a 00:03:51.943 LIB libspdk_bdev_null.a 00:03:51.943 LIB libspdk_bdev_gpt.a 00:03:51.943 SO libspdk_bdev_split.so.6.0 00:03:51.943 LIB libspdk_bdev_error.a 00:03:51.943 SYMLINK libspdk_blobfs_bdev.so 00:03:51.943 SO libspdk_bdev_null.so.6.0 00:03:51.943 LIB libspdk_bdev_passthru.a 00:03:51.943 SO libspdk_bdev_gpt.so.6.0 00:03:51.943 LIB libspdk_bdev_ftl.a 00:03:51.943 SO libspdk_bdev_error.so.6.0 00:03:51.943 SO libspdk_bdev_passthru.so.6.0 00:03:51.943 SYMLINK libspdk_bdev_split.so 00:03:51.943 LIB libspdk_bdev_malloc.a 00:03:51.943 LIB libspdk_bdev_zone_block.a 00:03:51.943 SO libspdk_bdev_ftl.so.6.0 00:03:51.943 SYMLINK libspdk_bdev_null.so 00:03:51.943 LIB libspdk_bdev_aio.a 00:03:51.943 SYMLINK libspdk_bdev_gpt.so 00:03:51.943 LIB libspdk_bdev_iscsi.a 00:03:51.943 SYMLINK libspdk_bdev_error.so 00:03:51.943 LIB libspdk_bdev_delay.a 00:03:51.943 SO libspdk_bdev_malloc.so.6.0 00:03:51.943 SO libspdk_bdev_zone_block.so.6.0 00:03:51.943 SO libspdk_bdev_aio.so.6.0 00:03:51.943 SYMLINK libspdk_bdev_passthru.so 00:03:51.943 SO libspdk_bdev_iscsi.so.6.0 00:03:51.943 SO libspdk_bdev_delay.so.6.0 00:03:51.943 SYMLINK libspdk_bdev_ftl.so 00:03:51.943 SYMLINK libspdk_bdev_malloc.so 00:03:51.943 LIB libspdk_bdev_lvol.a 00:03:52.204 SYMLINK libspdk_bdev_zone_block.so 00:03:52.204 SYMLINK libspdk_bdev_aio.so 00:03:52.204 SYMLINK libspdk_bdev_iscsi.so 00:03:52.204 SYMLINK libspdk_bdev_delay.so 00:03:52.204 SO libspdk_bdev_lvol.so.6.0 00:03:52.204 LIB libspdk_bdev_virtio.a 00:03:52.204 SO libspdk_bdev_virtio.so.6.0 00:03:52.204 SYMLINK libspdk_bdev_lvol.so 00:03:52.204 SYMLINK libspdk_bdev_virtio.so 00:03:52.465 LIB libspdk_bdev_raid.a 00:03:52.465 SO libspdk_bdev_raid.so.6.0 00:03:52.727 SYMLINK libspdk_bdev_raid.so 00:03:53.669 LIB libspdk_bdev_nvme.a 00:03:53.669 SO libspdk_bdev_nvme.so.7.0 00:03:53.930 SYMLINK libspdk_bdev_nvme.so 00:03:54.501 CC module/event/subsystems/iobuf/iobuf.o 00:03:54.501 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:54.501 CC module/event/subsystems/sock/sock.o 00:03:54.501 CC module/event/subsystems/vmd/vmd.o 00:03:54.501 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:54.501 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:54.501 CC module/event/subsystems/keyring/keyring.o 00:03:54.501 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:54.501 CC module/event/subsystems/fsdev/fsdev.o 00:03:54.501 CC module/event/subsystems/scheduler/scheduler.o 00:03:54.762 LIB libspdk_event_vhost_blk.a 00:03:54.762 LIB libspdk_event_scheduler.a 00:03:54.762 LIB libspdk_event_keyring.a 
00:03:54.762 LIB libspdk_event_vmd.a 00:03:54.762 LIB libspdk_event_sock.a 00:03:54.762 LIB libspdk_event_vfu_tgt.a 00:03:54.762 LIB libspdk_event_fsdev.a 00:03:54.762 LIB libspdk_event_iobuf.a 00:03:54.762 SO libspdk_event_scheduler.so.4.0 00:03:54.762 SO libspdk_event_vhost_blk.so.3.0 00:03:54.762 SO libspdk_event_vfu_tgt.so.3.0 00:03:54.762 SO libspdk_event_vmd.so.6.0 00:03:54.762 SO libspdk_event_fsdev.so.1.0 00:03:54.762 SO libspdk_event_keyring.so.1.0 00:03:54.762 SO libspdk_event_sock.so.5.0 00:03:54.762 SO libspdk_event_iobuf.so.3.0 00:03:54.762 SYMLINK libspdk_event_vhost_blk.so 00:03:54.762 SYMLINK libspdk_event_scheduler.so 00:03:54.762 SYMLINK libspdk_event_fsdev.so 00:03:54.762 SYMLINK libspdk_event_vfu_tgt.so 00:03:54.762 SYMLINK libspdk_event_keyring.so 00:03:54.762 SYMLINK libspdk_event_sock.so 00:03:54.762 SYMLINK libspdk_event_vmd.so 00:03:54.762 SYMLINK libspdk_event_iobuf.so 00:03:55.333 CC module/event/subsystems/accel/accel.o 00:03:55.333 LIB libspdk_event_accel.a 00:03:55.333 SO libspdk_event_accel.so.6.0 00:03:55.594 SYMLINK libspdk_event_accel.so 00:03:55.855 CC module/event/subsystems/bdev/bdev.o 00:03:56.116 LIB libspdk_event_bdev.a 00:03:56.116 SO libspdk_event_bdev.so.6.0 00:03:56.116 SYMLINK libspdk_event_bdev.so 00:03:56.378 CC module/event/subsystems/scsi/scsi.o 00:03:56.639 CC module/event/subsystems/ublk/ublk.o 00:03:56.639 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:56.639 CC module/event/subsystems/nbd/nbd.o 00:03:56.639 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:56.639 LIB libspdk_event_nbd.a 00:03:56.639 LIB libspdk_event_scsi.a 00:03:56.639 LIB libspdk_event_ublk.a 00:03:56.639 SO libspdk_event_nbd.so.6.0 00:03:56.639 SO libspdk_event_scsi.so.6.0 00:03:56.639 SO libspdk_event_ublk.so.3.0 00:03:56.639 LIB libspdk_event_nvmf.a 00:03:56.900 SYMLINK libspdk_event_scsi.so 00:03:56.900 SYMLINK libspdk_event_nbd.so 00:03:56.900 SYMLINK libspdk_event_ublk.so 00:03:56.900 SO libspdk_event_nvmf.so.6.0 00:03:56.900 SYMLINK libspdk_event_nvmf.so 00:03:57.162 CC module/event/subsystems/iscsi/iscsi.o 00:03:57.162 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:57.423 LIB libspdk_event_vhost_scsi.a 00:03:57.423 LIB libspdk_event_iscsi.a 00:03:57.423 SO libspdk_event_vhost_scsi.so.3.0 00:03:57.423 SO libspdk_event_iscsi.so.6.0 00:03:57.423 SYMLINK libspdk_event_vhost_scsi.so 00:03:57.423 SYMLINK libspdk_event_iscsi.so 00:03:57.685 SO libspdk.so.6.0 00:03:57.685 SYMLINK libspdk.so 00:03:57.946 CC app/trace_record/trace_record.o 00:03:57.946 CXX app/trace/trace.o 00:03:57.946 TEST_HEADER include/spdk/accel_module.h 00:03:57.946 TEST_HEADER include/spdk/accel.h 00:03:57.946 CC test/rpc_client/rpc_client_test.o 00:03:57.946 CC app/spdk_nvme_perf/perf.o 00:03:57.946 TEST_HEADER include/spdk/barrier.h 00:03:57.946 TEST_HEADER include/spdk/assert.h 00:03:57.946 CC app/spdk_nvme_identify/identify.o 00:03:57.946 TEST_HEADER include/spdk/base64.h 00:03:57.946 TEST_HEADER include/spdk/bdev.h 00:03:57.946 CC app/spdk_nvme_discover/discovery_aer.o 00:03:57.946 TEST_HEADER include/spdk/bdev_module.h 00:03:57.946 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.946 CC app/spdk_lspci/spdk_lspci.o 00:03:57.946 TEST_HEADER include/spdk/bit_array.h 00:03:57.946 TEST_HEADER include/spdk/blob_bdev.h 00:03:57.946 TEST_HEADER include/spdk/bit_pool.h 00:03:57.946 CC app/spdk_top/spdk_top.o 00:03:57.946 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:58.210 TEST_HEADER include/spdk/blob.h 00:03:58.210 TEST_HEADER include/spdk/blobfs.h 00:03:58.210 TEST_HEADER 
include/spdk/conf.h 00:03:58.210 TEST_HEADER include/spdk/cpuset.h 00:03:58.210 TEST_HEADER include/spdk/config.h 00:03:58.210 TEST_HEADER include/spdk/crc16.h 00:03:58.210 TEST_HEADER include/spdk/crc32.h 00:03:58.210 TEST_HEADER include/spdk/crc64.h 00:03:58.210 TEST_HEADER include/spdk/dif.h 00:03:58.210 TEST_HEADER include/spdk/dma.h 00:03:58.210 TEST_HEADER include/spdk/endian.h 00:03:58.210 TEST_HEADER include/spdk/env_dpdk.h 00:03:58.210 TEST_HEADER include/spdk/event.h 00:03:58.210 TEST_HEADER include/spdk/env.h 00:03:58.210 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:58.210 TEST_HEADER include/spdk/fd_group.h 00:03:58.210 TEST_HEADER include/spdk/fd.h 00:03:58.210 TEST_HEADER include/spdk/fsdev.h 00:03:58.210 TEST_HEADER include/spdk/file.h 00:03:58.210 TEST_HEADER include/spdk/fsdev_module.h 00:03:58.210 TEST_HEADER include/spdk/ftl.h 00:03:58.210 TEST_HEADER include/spdk/gpt_spec.h 00:03:58.210 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:58.210 TEST_HEADER include/spdk/hexlify.h 00:03:58.210 TEST_HEADER include/spdk/histogram_data.h 00:03:58.210 TEST_HEADER include/spdk/idxd.h 00:03:58.210 TEST_HEADER include/spdk/idxd_spec.h 00:03:58.210 TEST_HEADER include/spdk/init.h 00:03:58.210 TEST_HEADER include/spdk/ioat.h 00:03:58.210 TEST_HEADER include/spdk/iscsi_spec.h 00:03:58.210 TEST_HEADER include/spdk/ioat_spec.h 00:03:58.210 TEST_HEADER include/spdk/json.h 00:03:58.210 TEST_HEADER include/spdk/jsonrpc.h 00:03:58.210 CC app/spdk_dd/spdk_dd.o 00:03:58.210 TEST_HEADER include/spdk/keyring.h 00:03:58.210 TEST_HEADER include/spdk/keyring_module.h 00:03:58.210 TEST_HEADER include/spdk/likely.h 00:03:58.210 TEST_HEADER include/spdk/log.h 00:03:58.210 CC app/iscsi_tgt/iscsi_tgt.o 00:03:58.210 TEST_HEADER include/spdk/lvol.h 00:03:58.210 TEST_HEADER include/spdk/md5.h 00:03:58.210 TEST_HEADER include/spdk/memory.h 00:03:58.210 TEST_HEADER include/spdk/mmio.h 00:03:58.210 CC app/nvmf_tgt/nvmf_main.o 00:03:58.210 TEST_HEADER include/spdk/nbd.h 00:03:58.210 TEST_HEADER include/spdk/net.h 00:03:58.210 TEST_HEADER include/spdk/notify.h 00:03:58.210 TEST_HEADER include/spdk/nvme.h 00:03:58.210 TEST_HEADER include/spdk/nvme_intel.h 00:03:58.210 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:58.210 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:58.210 TEST_HEADER include/spdk/nvme_spec.h 00:03:58.210 TEST_HEADER include/spdk/nvme_zns.h 00:03:58.210 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:58.210 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:58.210 TEST_HEADER include/spdk/nvmf.h 00:03:58.210 TEST_HEADER include/spdk/nvmf_transport.h 00:03:58.210 TEST_HEADER include/spdk/nvmf_spec.h 00:03:58.210 TEST_HEADER include/spdk/pci_ids.h 00:03:58.210 TEST_HEADER include/spdk/opal.h 00:03:58.210 TEST_HEADER include/spdk/opal_spec.h 00:03:58.210 TEST_HEADER include/spdk/pipe.h 00:03:58.210 TEST_HEADER include/spdk/queue.h 00:03:58.210 TEST_HEADER include/spdk/reduce.h 00:03:58.210 CC app/spdk_tgt/spdk_tgt.o 00:03:58.210 TEST_HEADER include/spdk/rpc.h 00:03:58.210 TEST_HEADER include/spdk/scheduler.h 00:03:58.210 TEST_HEADER include/spdk/scsi_spec.h 00:03:58.210 TEST_HEADER include/spdk/scsi.h 00:03:58.210 TEST_HEADER include/spdk/stdinc.h 00:03:58.210 TEST_HEADER include/spdk/string.h 00:03:58.210 TEST_HEADER include/spdk/sock.h 00:03:58.210 TEST_HEADER include/spdk/thread.h 00:03:58.210 TEST_HEADER include/spdk/trace.h 00:03:58.210 TEST_HEADER include/spdk/trace_parser.h 00:03:58.210 TEST_HEADER include/spdk/tree.h 00:03:58.210 TEST_HEADER include/spdk/util.h 00:03:58.210 TEST_HEADER 
include/spdk/ublk.h 00:03:58.210 TEST_HEADER include/spdk/uuid.h 00:03:58.210 TEST_HEADER include/spdk/version.h 00:03:58.210 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:58.210 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:58.210 TEST_HEADER include/spdk/vhost.h 00:03:58.210 TEST_HEADER include/spdk/vmd.h 00:03:58.210 CXX test/cpp_headers/accel.o 00:03:58.210 TEST_HEADER include/spdk/xor.h 00:03:58.210 TEST_HEADER include/spdk/zipf.h 00:03:58.210 CXX test/cpp_headers/accel_module.o 00:03:58.210 CXX test/cpp_headers/barrier.o 00:03:58.210 CXX test/cpp_headers/assert.o 00:03:58.210 CXX test/cpp_headers/base64.o 00:03:58.210 CXX test/cpp_headers/bdev.o 00:03:58.210 CXX test/cpp_headers/bdev_module.o 00:03:58.210 CXX test/cpp_headers/bdev_zone.o 00:03:58.210 CXX test/cpp_headers/bit_array.o 00:03:58.210 CXX test/cpp_headers/bit_pool.o 00:03:58.210 CXX test/cpp_headers/blob_bdev.o 00:03:58.210 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.210 CXX test/cpp_headers/blobfs.o 00:03:58.210 CXX test/cpp_headers/blob.o 00:03:58.210 CXX test/cpp_headers/config.o 00:03:58.210 CXX test/cpp_headers/conf.o 00:03:58.210 CXX test/cpp_headers/crc16.o 00:03:58.210 CXX test/cpp_headers/cpuset.o 00:03:58.210 CXX test/cpp_headers/crc32.o 00:03:58.210 CXX test/cpp_headers/crc64.o 00:03:58.210 CXX test/cpp_headers/dma.o 00:03:58.210 CXX test/cpp_headers/endian.o 00:03:58.210 CXX test/cpp_headers/env_dpdk.o 00:03:58.210 CXX test/cpp_headers/dif.o 00:03:58.210 CXX test/cpp_headers/env.o 00:03:58.210 CXX test/cpp_headers/event.o 00:03:58.210 CXX test/cpp_headers/fd.o 00:03:58.210 CXX test/cpp_headers/fd_group.o 00:03:58.210 CXX test/cpp_headers/file.o 00:03:58.210 CXX test/cpp_headers/fsdev.o 00:03:58.210 CXX test/cpp_headers/fsdev_module.o 00:03:58.210 CXX test/cpp_headers/ftl.o 00:03:58.210 CXX test/cpp_headers/fuse_dispatcher.o 00:03:58.210 CXX test/cpp_headers/histogram_data.o 00:03:58.210 CXX test/cpp_headers/idxd.o 00:03:58.210 CXX test/cpp_headers/gpt_spec.o 00:03:58.210 CXX test/cpp_headers/hexlify.o 00:03:58.210 CXX test/cpp_headers/init.o 00:03:58.210 CXX test/cpp_headers/idxd_spec.o 00:03:58.210 CXX test/cpp_headers/ioat_spec.o 00:03:58.210 CXX test/cpp_headers/ioat.o 00:03:58.210 CXX test/cpp_headers/iscsi_spec.o 00:03:58.210 CXX test/cpp_headers/json.o 00:03:58.210 CXX test/cpp_headers/keyring.o 00:03:58.210 CXX test/cpp_headers/jsonrpc.o 00:03:58.210 CXX test/cpp_headers/keyring_module.o 00:03:58.210 CXX test/cpp_headers/likely.o 00:03:58.210 CXX test/cpp_headers/md5.o 00:03:58.210 CXX test/cpp_headers/log.o 00:03:58.210 CXX test/cpp_headers/lvol.o 00:03:58.210 CXX test/cpp_headers/memory.o 00:03:58.210 CXX test/cpp_headers/notify.o 00:03:58.210 CXX test/cpp_headers/nbd.o 00:03:58.210 CXX test/cpp_headers/nvme_intel.o 00:03:58.210 CXX test/cpp_headers/net.o 00:03:58.210 CXX test/cpp_headers/mmio.o 00:03:58.210 CXX test/cpp_headers/nvme.o 00:03:58.210 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.210 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.210 CXX test/cpp_headers/nvme_spec.o 00:03:58.210 CXX test/cpp_headers/nvme_zns.o 00:03:58.210 CXX test/cpp_headers/nvmf_cmd.o 00:03:58.210 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:58.210 CXX test/cpp_headers/nvmf_spec.o 00:03:58.210 CXX test/cpp_headers/nvmf_transport.o 00:03:58.210 CXX test/cpp_headers/nvmf.o 00:03:58.210 CXX test/cpp_headers/opal_spec.o 00:03:58.210 CXX test/cpp_headers/pipe.o 00:03:58.210 CXX test/cpp_headers/opal.o 00:03:58.210 CXX test/cpp_headers/pci_ids.o 00:03:58.210 CXX test/cpp_headers/queue.o 00:03:58.210 CXX test/cpp_headers/rpc.o 
00:03:58.210 CXX test/cpp_headers/reduce.o 00:03:58.210 CXX test/cpp_headers/scheduler.o 00:03:58.210 CXX test/cpp_headers/scsi_spec.o 00:03:58.210 CXX test/cpp_headers/scsi.o 00:03:58.210 CXX test/cpp_headers/sock.o 00:03:58.210 CXX test/cpp_headers/stdinc.o 00:03:58.210 CXX test/cpp_headers/string.o 00:03:58.210 CXX test/cpp_headers/thread.o 00:03:58.210 CXX test/cpp_headers/tree.o 00:03:58.210 CXX test/cpp_headers/trace.o 00:03:58.475 CXX test/cpp_headers/trace_parser.o 00:03:58.475 CXX test/cpp_headers/vfio_user_pci.o 00:03:58.475 CXX test/cpp_headers/ublk.o 00:03:58.475 CXX test/cpp_headers/util.o 00:03:58.475 CXX test/cpp_headers/version.o 00:03:58.475 CXX test/cpp_headers/vhost.o 00:03:58.475 CXX test/cpp_headers/vfio_user_spec.o 00:03:58.475 CXX test/cpp_headers/xor.o 00:03:58.475 CXX test/cpp_headers/uuid.o 00:03:58.475 CXX test/cpp_headers/zipf.o 00:03:58.475 CC test/app/histogram_perf/histogram_perf.o 00:03:58.475 CXX test/cpp_headers/vmd.o 00:03:58.475 CC test/env/memory/memory_ut.o 00:03:58.475 LINK spdk_lspci 00:03:58.475 CC test/thread/poller_perf/poller_perf.o 00:03:58.475 CC test/app/jsoncat/jsoncat.o 00:03:58.475 CC examples/util/zipf/zipf.o 00:03:58.475 CC app/fio/nvme/fio_plugin.o 00:03:58.475 CC examples/ioat/verify/verify.o 00:03:58.475 CC examples/ioat/perf/perf.o 00:03:58.475 CC test/app/stub/stub.o 00:03:58.475 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:58.475 CC test/app/bdev_svc/bdev_svc.o 00:03:58.475 CC test/env/pci/pci_ut.o 00:03:58.475 CC test/env/vtophys/vtophys.o 00:03:58.475 CC test/dma/test_dma/test_dma.o 00:03:58.475 CC app/fio/bdev/fio_plugin.o 00:03:58.740 LINK spdk_nvme_discover 00:03:58.740 LINK interrupt_tgt 00:03:58.740 LINK rpc_client_test 00:03:58.740 LINK nvmf_tgt 00:03:58.740 LINK iscsi_tgt 00:03:59.002 LINK spdk_trace_record 00:03:59.002 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:59.002 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:59.002 LINK spdk_tgt 00:03:59.002 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:59.002 LINK spdk_trace 00:03:59.002 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:59.261 CC test/env/mem_callbacks/mem_callbacks.o 00:03:59.261 LINK spdk_dd 00:03:59.261 LINK zipf 00:03:59.521 LINK env_dpdk_post_init 00:03:59.521 LINK vtophys 00:03:59.521 LINK stub 00:03:59.521 LINK bdev_svc 00:03:59.521 LINK verify 00:03:59.521 LINK ioat_perf 00:03:59.521 LINK poller_perf 00:03:59.521 LINK jsoncat 00:03:59.521 LINK histogram_perf 00:03:59.521 LINK pci_ut 00:03:59.521 CC app/vhost/vhost.o 00:03:59.781 LINK spdk_nvme_identify 00:03:59.781 LINK vhost_fuzz 00:03:59.781 LINK nvme_fuzz 00:03:59.781 LINK spdk_nvme_perf 00:03:59.781 LINK vhost 00:03:59.781 LINK spdk_nvme 00:03:59.781 CC examples/sock/hello_world/hello_sock.o 00:03:59.781 LINK test_dma 00:04:00.042 CC examples/vmd/led/led.o 00:04:00.042 CC examples/vmd/lsvmd/lsvmd.o 00:04:00.042 LINK mem_callbacks 00:04:00.042 CC examples/idxd/perf/perf.o 00:04:00.042 LINK spdk_bdev 00:04:00.042 CC examples/thread/thread/thread_ex.o 00:04:00.042 CC test/event/event_perf/event_perf.o 00:04:00.042 CC test/event/reactor/reactor.o 00:04:00.042 CC test/event/reactor_perf/reactor_perf.o 00:04:00.042 CC test/event/app_repeat/app_repeat.o 00:04:00.042 CC test/event/scheduler/scheduler.o 00:04:00.042 LINK lsvmd 00:04:00.042 LINK spdk_top 00:04:00.042 LINK led 00:04:00.302 LINK hello_sock 00:04:00.302 LINK reactor 00:04:00.302 LINK event_perf 00:04:00.302 LINK reactor_perf 00:04:00.302 LINK app_repeat 00:04:00.302 LINK thread 00:04:00.302 LINK idxd_perf 00:04:00.302 LINK scheduler 
00:04:00.563 LINK memory_ut 00:04:00.563 CC test/nvme/sgl/sgl.o 00:04:00.563 CC test/nvme/overhead/overhead.o 00:04:00.563 CC test/nvme/startup/startup.o 00:04:00.563 CC test/nvme/reserve/reserve.o 00:04:00.563 CC test/nvme/reset/reset.o 00:04:00.563 CC test/nvme/aer/aer.o 00:04:00.563 CC test/nvme/e2edp/nvme_dp.o 00:04:00.563 CC test/nvme/fused_ordering/fused_ordering.o 00:04:00.563 CC test/nvme/cuse/cuse.o 00:04:00.563 CC test/nvme/compliance/nvme_compliance.o 00:04:00.563 CC test/nvme/boot_partition/boot_partition.o 00:04:00.563 CC test/nvme/simple_copy/simple_copy.o 00:04:00.563 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:00.563 CC test/nvme/err_injection/err_injection.o 00:04:00.563 CC test/nvme/connect_stress/connect_stress.o 00:04:00.563 CC test/accel/dif/dif.o 00:04:00.563 CC test/nvme/fdp/fdp.o 00:04:00.563 CC test/blobfs/mkfs/mkfs.o 00:04:00.824 CC test/lvol/esnap/esnap.o 00:04:00.824 LINK iscsi_fuzz 00:04:00.824 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:00.824 LINK startup 00:04:00.824 LINK boot_partition 00:04:00.824 CC examples/nvme/abort/abort.o 00:04:00.824 CC examples/nvme/hello_world/hello_world.o 00:04:00.824 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:00.824 LINK connect_stress 00:04:00.824 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.824 CC examples/nvme/arbitration/arbitration.o 00:04:00.824 CC examples/nvme/reconnect/reconnect.o 00:04:00.824 LINK err_injection 00:04:00.824 CC examples/nvme/hotplug/hotplug.o 00:04:00.824 LINK fused_ordering 00:04:00.824 LINK doorbell_aers 00:04:00.824 LINK reserve 00:04:00.824 LINK simple_copy 00:04:00.824 LINK sgl 00:04:00.824 LINK mkfs 00:04:00.824 LINK reset 00:04:00.824 LINK nvme_dp 00:04:00.824 LINK overhead 00:04:00.824 LINK aer 00:04:00.824 CC examples/accel/perf/accel_perf.o 00:04:00.824 LINK fdp 00:04:00.824 LINK nvme_compliance 00:04:00.824 CC examples/blob/cli/blobcli.o 00:04:00.824 CC examples/blob/hello_world/hello_blob.o 00:04:01.085 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:01.085 LINK cmb_copy 00:04:01.085 LINK pmr_persistence 00:04:01.085 LINK hello_world 00:04:01.085 LINK hotplug 00:04:01.085 LINK abort 00:04:01.085 LINK arbitration 00:04:01.085 LINK reconnect 00:04:01.085 LINK hello_blob 00:04:01.345 LINK dif 00:04:01.345 LINK hello_fsdev 00:04:01.345 LINK nvme_manage 00:04:01.345 LINK accel_perf 00:04:01.345 LINK blobcli 00:04:01.917 LINK cuse 00:04:01.917 CC test/bdev/bdevio/bdevio.o 00:04:01.917 CC examples/bdev/hello_world/hello_bdev.o 00:04:01.917 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.178 LINK bdevio 00:04:02.178 LINK hello_bdev 00:04:02.749 LINK bdevperf 00:04:03.321 CC examples/nvmf/nvmf/nvmf.o 00:04:03.581 LINK nvmf 00:04:05.564 LINK esnap 00:04:05.564 00:04:05.564 real 0m54.604s 00:04:05.564 user 6m36.196s 00:04:05.564 sys 4m21.335s 00:04:05.564 14:00:09 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:05.564 14:00:09 make -- common/autotest_common.sh@10 -- $ set +x 00:04:05.564 ************************************ 00:04:05.564 END TEST make 00:04:05.564 ************************************ 00:04:05.564 14:00:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:05.564 14:00:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:05.564 14:00:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:05.564 14:00:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.564 14:00:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:05.564 14:00:09 
-- pm/common@44 -- $ pid=1351455 00:04:05.564 14:00:09 -- pm/common@50 -- $ kill -TERM 1351455 00:04:05.564 14:00:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.564 14:00:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:05.564 14:00:09 -- pm/common@44 -- $ pid=1351456 00:04:05.564 14:00:09 -- pm/common@50 -- $ kill -TERM 1351456 00:04:05.564 14:00:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.564 14:00:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:05.564 14:00:09 -- pm/common@44 -- $ pid=1351458 00:04:05.564 14:00:09 -- pm/common@50 -- $ kill -TERM 1351458 00:04:05.564 14:00:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.564 14:00:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:05.564 14:00:09 -- pm/common@44 -- $ pid=1351481 00:04:05.564 14:00:09 -- pm/common@50 -- $ sudo -E kill -TERM 1351481 00:04:05.826 14:00:09 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.826 14:00:09 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.826 14:00:09 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.826 14:00:09 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.826 14:00:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.826 14:00:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.826 14:00:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.826 14:00:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.826 14:00:09 -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.826 14:00:09 -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.826 14:00:09 -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.826 14:00:09 -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.826 14:00:09 -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.826 14:00:09 -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.826 14:00:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.826 14:00:09 -- scripts/common.sh@344 -- # case "$op" in 00:04:05.826 14:00:09 -- scripts/common.sh@345 -- # : 1 00:04:05.826 14:00:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.826 14:00:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.826 14:00:09 -- scripts/common.sh@365 -- # decimal 1 00:04:05.826 14:00:09 -- scripts/common.sh@353 -- # local d=1 00:04:05.826 14:00:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.826 14:00:09 -- scripts/common.sh@355 -- # echo 1 00:04:05.826 14:00:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.826 14:00:09 -- scripts/common.sh@366 -- # decimal 2 00:04:05.826 14:00:09 -- scripts/common.sh@353 -- # local d=2 00:04:05.826 14:00:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.826 14:00:09 -- scripts/common.sh@355 -- # echo 2 00:04:05.826 14:00:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.826 14:00:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.826 14:00:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.826 14:00:09 -- scripts/common.sh@368 -- # return 0 00:04:05.826 14:00:09 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.826 14:00:09 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.826 --rc genhtml_branch_coverage=1 00:04:05.826 --rc genhtml_function_coverage=1 00:04:05.826 --rc genhtml_legend=1 00:04:05.826 --rc geninfo_all_blocks=1 00:04:05.826 --rc geninfo_unexecuted_blocks=1 00:04:05.826 00:04:05.826 ' 00:04:05.826 14:00:09 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.826 --rc genhtml_branch_coverage=1 00:04:05.826 --rc genhtml_function_coverage=1 00:04:05.826 --rc genhtml_legend=1 00:04:05.826 --rc geninfo_all_blocks=1 00:04:05.826 --rc geninfo_unexecuted_blocks=1 00:04:05.826 00:04:05.826 ' 00:04:05.826 14:00:09 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.826 --rc genhtml_branch_coverage=1 00:04:05.826 --rc genhtml_function_coverage=1 00:04:05.826 --rc genhtml_legend=1 00:04:05.826 --rc geninfo_all_blocks=1 00:04:05.826 --rc geninfo_unexecuted_blocks=1 00:04:05.826 00:04:05.826 ' 00:04:05.826 14:00:09 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.826 --rc genhtml_branch_coverage=1 00:04:05.826 --rc genhtml_function_coverage=1 00:04:05.826 --rc genhtml_legend=1 00:04:05.826 --rc geninfo_all_blocks=1 00:04:05.826 --rc geninfo_unexecuted_blocks=1 00:04:05.826 00:04:05.826 ' 00:04:05.826 14:00:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.826 14:00:09 -- nvmf/common.sh@7 -- # uname -s 00:04:05.826 14:00:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.826 14:00:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.826 14:00:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.826 14:00:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.826 14:00:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.826 14:00:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.826 14:00:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.826 14:00:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.826 14:00:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.826 14:00:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.826 14:00:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:05.826 14:00:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:05.826 14:00:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.826 14:00:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.826 14:00:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:05.826 14:00:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.826 14:00:09 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.826 14:00:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.826 14:00:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.826 14:00:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.826 14:00:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.827 14:00:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.827 14:00:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.827 14:00:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.827 14:00:09 -- paths/export.sh@5 -- # export PATH 00:04:05.827 14:00:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.827 14:00:09 -- nvmf/common.sh@51 -- # : 0 00:04:05.827 14:00:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.827 14:00:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:05.827 14:00:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.827 14:00:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.827 14:00:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.827 14:00:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.827 14:00:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.827 14:00:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.827 14:00:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.827 14:00:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.827 14:00:09 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.827 14:00:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.827 14:00:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.827 14:00:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:05.827 14:00:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.827 14:00:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:05.827 14:00:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.827 14:00:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.827 14:00:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.827 14:00:09 -- spdk/autotest.sh@48 -- # udevadm_pid=1434107 00:04:05.827 14:00:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:05.827 14:00:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.827 14:00:09 -- pm/common@17 -- # local monitor 00:04:05.827 14:00:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.827 14:00:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.827 14:00:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.827 14:00:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:05.827 14:00:09 -- pm/common@21 -- # date +%s 00:04:05.827 14:00:09 -- pm/common@25 -- # sleep 1 00:04:05.827 14:00:09 -- pm/common@21 -- # date +%s 00:04:05.827 14:00:09 -- pm/common@21 -- # date +%s 00:04:05.827 14:00:09 -- pm/common@21 -- # date +%s 00:04:05.827 14:00:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728820809 00:04:05.827 14:00:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728820809 00:04:05.827 14:00:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728820809 00:04:05.827 14:00:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728820809 00:04:05.827 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728820809_collect-cpu-load.pm.log 00:04:05.827 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728820809_collect-vmstat.pm.log 00:04:05.827 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728820809_collect-cpu-temp.pm.log 00:04:06.088 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728820809_collect-bmc-pm.bmc.pm.log 00:04:07.031 14:00:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:07.031 14:00:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:07.031 14:00:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.031 14:00:10 -- common/autotest_common.sh@10 -- # set +x 00:04:07.031 14:00:10 -- spdk/autotest.sh@59 -- # create_test_list 00:04:07.031 14:00:10 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:07.031 14:00:10 -- common/autotest_common.sh@10 -- # set +x 00:04:07.031 14:00:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:07.031 14:00:10 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.031 14:00:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.031 14:00:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:07.031 14:00:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.031 14:00:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:07.031 14:00:10 -- common/autotest_common.sh@1455 -- # uname 00:04:07.031 14:00:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:07.031 14:00:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:07.031 14:00:10 -- common/autotest_common.sh@1475 -- # uname 00:04:07.031 14:00:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:07.031 14:00:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:07.031 14:00:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:07.031 lcov: LCOV version 1.15 00:04:07.031 14:00:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:21.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.943 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:40.056 14:00:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:40.056 14:00:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.056 14:00:40 -- common/autotest_common.sh@10 -- # set +x 00:04:40.056 14:00:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:40.056 14:00:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.627 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:40.627 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:40.627 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:40.627 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:40.887 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:40.887 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:41.147 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:41.147 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:41.147 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:41.407 14:00:44 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:41.407 14:00:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:41.407 14:00:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:41.407 14:00:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:41.407 14:00:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:41.407 14:00:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:41.407 14:00:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:41.407 14:00:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.407 14:00:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:41.407 14:00:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:41.407 14:00:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.407 14:00:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.407 14:00:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:41.407 14:00:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:41.407 14:00:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:41.407 No valid GPT data, bailing 00:04:41.407 14:00:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.407 14:00:44 -- scripts/common.sh@394 -- # pt= 00:04:41.407 14:00:44 -- scripts/common.sh@395 -- # return 1 00:04:41.407 14:00:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:41.407 1+0 records in 00:04:41.407 1+0 records out 00:04:41.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00187922 s, 558 MB/s 00:04:41.408 14:00:45 -- spdk/autotest.sh@105 -- # sync 00:04:41.408 14:00:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:41.408 14:00:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:41.408 14:00:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:51.416 14:00:53 -- spdk/autotest.sh@111 -- # uname -s 00:04:51.416 14:00:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:51.416 14:00:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:51.416 14:00:53 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:53.964 Hugepages 00:04:53.964 node hugesize free / total 00:04:53.964 node0 1048576kB 0 / 0 00:04:53.964 node0 2048kB 0 / 0 00:04:53.964 node1 1048576kB 0 / 0 00:04:53.964 node1 2048kB 0 / 0 00:04:53.964 00:04:53.964 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.964 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:53.964 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:53.964 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:53.964 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:53.964 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:53.964 14:00:57 -- spdk/autotest.sh@117 -- # uname -s 00:04:53.964 14:00:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:53.964 14:00:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:53.964 14:00:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.267 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.267 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.267 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.267 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.267 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:57.528 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:59.442 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:59.702 14:01:03 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:00.644 14:01:04 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:00.645 14:01:04 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:00.645 14:01:04 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.645 14:01:04 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:00.645 14:01:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:00.645 14:01:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:00.645 14:01:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.645 14:01:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.645 14:01:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:00.645 14:01:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:00.645 14:01:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:05:00.645 14:01:04 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.850 Waiting for block devices as requested 00:05:04.850 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:04.850 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:05.111 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:05.111 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:05.372 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:05.372 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.372 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.633 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.633 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.633 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:05:06.204 14:01:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:06.204 14:01:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:05:06.204 14:01:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:06.204 14:01:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:06.204 14:01:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:06.204 14:01:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:06.204 14:01:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:05:06.204 14:01:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:06.204 14:01:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:06.204 14:01:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:06.204 14:01:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:06.204 14:01:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:06.204 14:01:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:06.204 14:01:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:06.204 14:01:09 -- common/autotest_common.sh@1541 -- # continue 00:05:06.204 14:01:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:06.204 14:01:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.204 14:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.204 14:01:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:06.204 14:01:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.204 14:01:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.204 14:01:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.411 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:10.411 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:10.411 14:01:13 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:10.411 14:01:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.411 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:10.411 14:01:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:10.411 14:01:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:10.411 14:01:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.411 14:01:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:10.411 14:01:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:10.411 14:01:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:10.411 14:01:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:10.411 14:01:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:10.411 14:01:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:10.411 14:01:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:10.411 14:01:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.411 14:01:13 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:10.411 14:01:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:10.411 14:01:14 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:10.411 14:01:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:05:10.411 14:01:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:10.411 14:01:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:10.411 14:01:14 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:05:10.411 14:01:14 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:10.411 14:01:14 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:10.411 14:01:14 -- common/autotest_common.sh@1570 -- # return 0 00:05:10.411 14:01:14 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:10.411 14:01:14 -- common/autotest_common.sh@1578 -- # return 0 00:05:10.411 14:01:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:10.411 14:01:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:10.411 14:01:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.412 14:01:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:10.412 14:01:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:10.412 14:01:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.412 14:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.412 14:01:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:10.412 14:01:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.412 14:01:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.412 14:01:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.412 14:01:14 -- common/autotest_common.sh@10 -- # set +x 00:05:10.412 ************************************ 00:05:10.412 START TEST env 00:05:10.412 ************************************ 00:05:10.412 14:01:14 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:10.673 * Looking for test storage... 
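The opal_revert_cleanup pass above reduces to a small bdf-filtering idiom: enumerate the NVMe controllers from gen_nvme.sh's JSON config, then keep only those whose PCI device ID matches the OPAL-capable part (0x0a54). A minimal stand-alone sketch of that idiom using this workspace's paths — a simplification of the traced helper, not its verbatim source:

    # Sketch of the get_nvme_bdfs_by_id idiom traced above (simplified).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    get_nvme_bdfs_by_id() {
        local id=$1 bdf
        for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
            # sysfs 'device' holds the PCI device ID, e.g. 0xa80a on this box
            [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && echo "$bdf"
        done
    }
    get_nvme_bdfs_by_id 0x0a54   # empty here, so the OPAL revert is skipped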
00:05:10.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.673 14:01:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.673 14:01:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.673 14:01:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.673 14:01:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.673 14:01:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.673 14:01:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.673 14:01:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.673 14:01:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.673 14:01:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.673 14:01:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.673 14:01:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.673 14:01:14 env -- scripts/common.sh@344 -- # case "$op" in 00:05:10.673 14:01:14 env -- scripts/common.sh@345 -- # : 1 00:05:10.673 14:01:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.673 14:01:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.673 14:01:14 env -- scripts/common.sh@365 -- # decimal 1 00:05:10.673 14:01:14 env -- scripts/common.sh@353 -- # local d=1 00:05:10.673 14:01:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.673 14:01:14 env -- scripts/common.sh@355 -- # echo 1 00:05:10.673 14:01:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.673 14:01:14 env -- scripts/common.sh@366 -- # decimal 2 00:05:10.673 14:01:14 env -- scripts/common.sh@353 -- # local d=2 00:05:10.673 14:01:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.673 14:01:14 env -- scripts/common.sh@355 -- # echo 2 00:05:10.673 14:01:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.673 14:01:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.673 14:01:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.673 14:01:14 env -- scripts/common.sh@368 -- # return 0 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.673 --rc genhtml_branch_coverage=1 00:05:10.673 --rc genhtml_function_coverage=1 00:05:10.673 --rc genhtml_legend=1 00:05:10.673 --rc geninfo_all_blocks=1 00:05:10.673 --rc geninfo_unexecuted_blocks=1 00:05:10.673 00:05:10.673 ' 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.673 --rc genhtml_branch_coverage=1 00:05:10.673 --rc genhtml_function_coverage=1 00:05:10.673 --rc genhtml_legend=1 00:05:10.673 --rc geninfo_all_blocks=1 00:05:10.673 --rc geninfo_unexecuted_blocks=1 00:05:10.673 00:05:10.673 ' 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:10.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.673 --rc genhtml_branch_coverage=1 00:05:10.673 --rc genhtml_function_coverage=1 
00:05:10.673 --rc genhtml_legend=1 00:05:10.673 --rc geninfo_all_blocks=1 00:05:10.673 --rc geninfo_unexecuted_blocks=1 00:05:10.673 00:05:10.673 ' 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.673 --rc genhtml_branch_coverage=1 00:05:10.673 --rc genhtml_function_coverage=1 00:05:10.673 --rc genhtml_legend=1 00:05:10.673 --rc geninfo_all_blocks=1 00:05:10.673 --rc geninfo_unexecuted_blocks=1 00:05:10.673 00:05:10.673 ' 00:05:10.673 14:01:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.673 14:01:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.673 14:01:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.673 ************************************ 00:05:10.673 START TEST env_memory 00:05:10.673 ************************************ 00:05:10.674 14:01:14 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:10.674 00:05:10.674 00:05:10.674 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.674 http://cunit.sourceforge.net/ 00:05:10.674 00:05:10.674 00:05:10.674 Suite: memory 00:05:10.936 Test: alloc and free memory map ...[2024-10-13 14:01:14.393051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:10.936 passed 00:05:10.936 Test: mem map translation ...[2024-10-13 14:01:14.418634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:10.936 [2024-10-13 14:01:14.418669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:10.936 [2024-10-13 14:01:14.418714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:10.936 [2024-10-13 14:01:14.418722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:10.936 passed 00:05:10.936 Test: mem map registration ...[2024-10-13 14:01:14.473954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:10.936 [2024-10-13 14:01:14.473998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:10.936 passed 00:05:10.936 Test: mem map adjacent registrations ...passed 00:05:10.936 00:05:10.936 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.936 suites 1 1 n/a 0 0 00:05:10.936 tests 4 4 4 0 0 00:05:10.936 asserts 152 152 152 0 n/a 00:05:10.936 00:05:10.936 Elapsed time = 0.192 seconds 00:05:10.936 00:05:10.936 real 0m0.207s 00:05:10.936 user 0m0.196s 00:05:10.936 sys 0m0.010s 00:05:10.936 14:01:14 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.936 14:01:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
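Each suite in this section is launched through autotest_common.sh's run_test wrapper, whose xtrace ('[' 2 -le 1 ']', xtrace_disable, the START/END banners) is interleaved above. Its observable behavior boils down to roughly the following sketch; the real helper also manages xtrace state and per-test timing that is elided here:

    # Simplified analogue of run_test as observed in this log.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1    # mirrors the '[' 2 -le 1 ']' argument guard
        echo "************ START TEST $name ************"
        "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test env_memory "$rootdir/test/env/memory/memory_ut"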
00:05:10.936 ************************************ 00:05:10.936 END TEST env_memory 00:05:10.936 ************************************ 00:05:10.936 14:01:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.936 14:01:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.936 14:01:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.936 14:01:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.936 ************************************ 00:05:10.936 START TEST env_vtophys 00:05:10.936 ************************************ 00:05:10.936 14:01:14 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:11.198 EAL: lib.eal log level changed from notice to debug 00:05:11.198 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.198 EAL: Detected lcore 1 as core 1 on socket 0 00:05:11.198 EAL: Detected lcore 2 as core 2 on socket 0 00:05:11.198 EAL: Detected lcore 3 as core 3 on socket 0 00:05:11.198 EAL: Detected lcore 4 as core 4 on socket 0 00:05:11.198 EAL: Detected lcore 5 as core 5 on socket 0 00:05:11.198 EAL: Detected lcore 6 as core 6 on socket 0 00:05:11.198 EAL: Detected lcore 7 as core 7 on socket 0 00:05:11.198 EAL: Detected lcore 8 as core 8 on socket 0 00:05:11.199 EAL: Detected lcore 9 as core 9 on socket 0 00:05:11.199 EAL: Detected lcore 10 as core 10 on socket 0 00:05:11.199 EAL: Detected lcore 11 as core 11 on socket 0 00:05:11.199 EAL: Detected lcore 12 as core 12 on socket 0 00:05:11.199 EAL: Detected lcore 13 as core 13 on socket 0 00:05:11.199 EAL: Detected lcore 14 as core 14 on socket 0 00:05:11.199 EAL: Detected lcore 15 as core 15 on socket 0 00:05:11.199 EAL: Detected lcore 16 as core 16 on socket 0 00:05:11.199 EAL: Detected lcore 17 as core 17 on socket 0 00:05:11.199 EAL: Detected lcore 18 as core 18 on socket 0 00:05:11.199 EAL: Detected lcore 19 as core 19 on socket 0 00:05:11.199 EAL: Detected lcore 20 as core 20 on socket 0 00:05:11.199 EAL: Detected lcore 21 as core 21 on socket 0 00:05:11.199 EAL: Detected lcore 22 as core 22 on socket 0 00:05:11.199 EAL: Detected lcore 23 as core 23 on socket 0 00:05:11.199 EAL: Detected lcore 24 as core 24 on socket 0 00:05:11.199 EAL: Detected lcore 25 as core 25 on socket 0 00:05:11.199 EAL: Detected lcore 26 as core 26 on socket 0 00:05:11.199 EAL: Detected lcore 27 as core 27 on socket 0 00:05:11.199 EAL: Detected lcore 28 as core 28 on socket 0 00:05:11.199 EAL: Detected lcore 29 as core 29 on socket 0 00:05:11.199 EAL: Detected lcore 30 as core 30 on socket 0 00:05:11.199 EAL: Detected lcore 31 as core 31 on socket 0 00:05:11.199 EAL: Detected lcore 32 as core 32 on socket 0 00:05:11.199 EAL: Detected lcore 33 as core 33 on socket 0 00:05:11.199 EAL: Detected lcore 34 as core 34 on socket 0 00:05:11.199 EAL: Detected lcore 35 as core 35 on socket 0 00:05:11.199 EAL: Detected lcore 36 as core 0 on socket 1 00:05:11.199 EAL: Detected lcore 37 as core 1 on socket 1 00:05:11.199 EAL: Detected lcore 38 as core 2 on socket 1 00:05:11.199 EAL: Detected lcore 39 as core 3 on socket 1 00:05:11.199 EAL: Detected lcore 40 as core 4 on socket 1 00:05:11.199 EAL: Detected lcore 41 as core 5 on socket 1 00:05:11.199 EAL: Detected lcore 42 as core 6 on socket 1 00:05:11.199 EAL: Detected lcore 43 as core 7 on socket 1 00:05:11.199 EAL: Detected lcore 44 as core 8 on socket 1 00:05:11.199 EAL: Detected lcore 45 as core 9 on socket 1 
00:05:11.199 EAL: Detected lcore 46 as core 10 on socket 1 00:05:11.199 EAL: Detected lcore 47 as core 11 on socket 1 00:05:11.199 EAL: Detected lcore 48 as core 12 on socket 1 00:05:11.199 EAL: Detected lcore 49 as core 13 on socket 1 00:05:11.199 EAL: Detected lcore 50 as core 14 on socket 1 00:05:11.199 EAL: Detected lcore 51 as core 15 on socket 1 00:05:11.199 EAL: Detected lcore 52 as core 16 on socket 1 00:05:11.199 EAL: Detected lcore 53 as core 17 on socket 1 00:05:11.199 EAL: Detected lcore 54 as core 18 on socket 1 00:05:11.199 EAL: Detected lcore 55 as core 19 on socket 1 00:05:11.199 EAL: Detected lcore 56 as core 20 on socket 1 00:05:11.199 EAL: Detected lcore 57 as core 21 on socket 1 00:05:11.199 EAL: Detected lcore 58 as core 22 on socket 1 00:05:11.199 EAL: Detected lcore 59 as core 23 on socket 1 00:05:11.199 EAL: Detected lcore 60 as core 24 on socket 1 00:05:11.199 EAL: Detected lcore 61 as core 25 on socket 1 00:05:11.199 EAL: Detected lcore 62 as core 26 on socket 1 00:05:11.199 EAL: Detected lcore 63 as core 27 on socket 1 00:05:11.199 EAL: Detected lcore 64 as core 28 on socket 1 00:05:11.199 EAL: Detected lcore 65 as core 29 on socket 1 00:05:11.199 EAL: Detected lcore 66 as core 30 on socket 1 00:05:11.199 EAL: Detected lcore 67 as core 31 on socket 1 00:05:11.199 EAL: Detected lcore 68 as core 32 on socket 1 00:05:11.199 EAL: Detected lcore 69 as core 33 on socket 1 00:05:11.199 EAL: Detected lcore 70 as core 34 on socket 1 00:05:11.199 EAL: Detected lcore 71 as core 35 on socket 1 00:05:11.199 EAL: Detected lcore 72 as core 0 on socket 0 00:05:11.199 EAL: Detected lcore 73 as core 1 on socket 0 00:05:11.199 EAL: Detected lcore 74 as core 2 on socket 0 00:05:11.199 EAL: Detected lcore 75 as core 3 on socket 0 00:05:11.199 EAL: Detected lcore 76 as core 4 on socket 0 00:05:11.199 EAL: Detected lcore 77 as core 5 on socket 0 00:05:11.199 EAL: Detected lcore 78 as core 6 on socket 0 00:05:11.199 EAL: Detected lcore 79 as core 7 on socket 0 00:05:11.199 EAL: Detected lcore 80 as core 8 on socket 0 00:05:11.199 EAL: Detected lcore 81 as core 9 on socket 0 00:05:11.199 EAL: Detected lcore 82 as core 10 on socket 0 00:05:11.199 EAL: Detected lcore 83 as core 11 on socket 0 00:05:11.199 EAL: Detected lcore 84 as core 12 on socket 0 00:05:11.199 EAL: Detected lcore 85 as core 13 on socket 0 00:05:11.199 EAL: Detected lcore 86 as core 14 on socket 0 00:05:11.199 EAL: Detected lcore 87 as core 15 on socket 0 00:05:11.199 EAL: Detected lcore 88 as core 16 on socket 0 00:05:11.199 EAL: Detected lcore 89 as core 17 on socket 0 00:05:11.199 EAL: Detected lcore 90 as core 18 on socket 0 00:05:11.199 EAL: Detected lcore 91 as core 19 on socket 0 00:05:11.199 EAL: Detected lcore 92 as core 20 on socket 0 00:05:11.199 EAL: Detected lcore 93 as core 21 on socket 0 00:05:11.199 EAL: Detected lcore 94 as core 22 on socket 0 00:05:11.199 EAL: Detected lcore 95 as core 23 on socket 0 00:05:11.199 EAL: Detected lcore 96 as core 24 on socket 0 00:05:11.199 EAL: Detected lcore 97 as core 25 on socket 0 00:05:11.199 EAL: Detected lcore 98 as core 26 on socket 0 00:05:11.199 EAL: Detected lcore 99 as core 27 on socket 0 00:05:11.199 EAL: Detected lcore 100 as core 28 on socket 0 00:05:11.199 EAL: Detected lcore 101 as core 29 on socket 0 00:05:11.199 EAL: Detected lcore 102 as core 30 on socket 0 00:05:11.199 EAL: Detected lcore 103 as core 31 on socket 0 00:05:11.199 EAL: Detected lcore 104 as core 32 on socket 0 00:05:11.199 EAL: Detected lcore 105 as core 33 on socket 0 00:05:11.199 EAL: 
Detected lcore 106 as core 34 on socket 0 00:05:11.199 EAL: Detected lcore 107 as core 35 on socket 0 00:05:11.199 EAL: Detected lcore 108 as core 0 on socket 1 00:05:11.199 EAL: Detected lcore 109 as core 1 on socket 1 00:05:11.199 EAL: Detected lcore 110 as core 2 on socket 1 00:05:11.199 EAL: Detected lcore 111 as core 3 on socket 1 00:05:11.199 EAL: Detected lcore 112 as core 4 on socket 1 00:05:11.199 EAL: Detected lcore 113 as core 5 on socket 1 00:05:11.199 EAL: Detected lcore 114 as core 6 on socket 1 00:05:11.199 EAL: Detected lcore 115 as core 7 on socket 1 00:05:11.199 EAL: Detected lcore 116 as core 8 on socket 1 00:05:11.199 EAL: Detected lcore 117 as core 9 on socket 1 00:05:11.199 EAL: Detected lcore 118 as core 10 on socket 1 00:05:11.199 EAL: Detected lcore 119 as core 11 on socket 1 00:05:11.199 EAL: Detected lcore 120 as core 12 on socket 1 00:05:11.199 EAL: Detected lcore 121 as core 13 on socket 1 00:05:11.199 EAL: Detected lcore 122 as core 14 on socket 1 00:05:11.199 EAL: Detected lcore 123 as core 15 on socket 1 00:05:11.199 EAL: Detected lcore 124 as core 16 on socket 1 00:05:11.199 EAL: Detected lcore 125 as core 17 on socket 1 00:05:11.199 EAL: Detected lcore 126 as core 18 on socket 1 00:05:11.199 EAL: Detected lcore 127 as core 19 on socket 1 00:05:11.199 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:11.199 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:11.199 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:11.199 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:11.199 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:11.199 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:11.199 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:11.199 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:11.199 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:11.199 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:11.199 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:11.199 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:11.199 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:11.199 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:11.199 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:11.199 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:11.199 EAL: Maximum logical cores by configuration: 128 00:05:11.199 EAL: Detected CPU lcores: 128 00:05:11.199 EAL: Detected NUMA nodes: 2 00:05:11.199 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:11.199 EAL: Detected shared linkage of DPDK 00:05:11.199 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:11.199 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:11.199 EAL: Registered [vdev] bus. 
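EAL's topology census above (128 usable lcores on 2 sockets, with lcores 128-143 skipped by the 128-lcore build limit) can be cross-checked from sysfs without DPDK. A small sketch, assuming the standard Linux sysfs layout:

    # Cross-check the EAL lcore/NUMA census against sysfs.
    nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)
    cpus=$(ls -d /sys/devices/system/cpu/cpu[0-9]* | wc -l)
    echo "NUMA nodes: $nodes, logical CPUs: $cpus"
    # Per-CPU placement, mirroring the 'lcore X as core Y on socket Z' lines:
    for c in /sys/devices/system/cpu/cpu[0-9]*; do
        echo "$(basename "$c"): core $(cat "$c/topology/core_id") on socket $(cat "$c/topology/physical_package_id")"
    done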
00:05:11.199 EAL: bus.vdev log level changed from disabled to notice 00:05:11.199 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:11.200 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:11.200 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:11.200 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:11.200 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:11.200 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:11.200 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:11.200 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:11.200 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Bus pci wants IOVA as 'DC' 00:05:11.200 EAL: Bus vdev wants IOVA as 'DC' 00:05:11.200 EAL: Buses did not request a specific IOVA mode. 00:05:11.200 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:11.200 EAL: Selected IOVA mode 'VA' 00:05:11.200 EAL: Probing VFIO support... 00:05:11.200 EAL: IOMMU type 1 (Type 1) is supported 00:05:11.200 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:11.200 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:11.200 EAL: VFIO support initialized 00:05:11.200 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.200 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.200 EAL: Setting up physically contiguous memory... 
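The "selecting IOVA as VA" decision above hinges on a usable IOMMU plus VFIO. Whether a host will pass EAL's probe can be roughly pre-checked from sysfs; this is a heuristic sketch, not EAL's actual test:

    # Heuristic pre-check for the VFIO/IOMMU conditions EAL probes above.
    if [ -d /sys/kernel/iommu_groups ] && \
       [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU groups present: IOVA 'VA' via vfio-pci should be selectable"
    else
        echo "no IOMMU groups: expect IOVA 'PA' or no-IOMMU operation"
    fi
    lsmod | grep -q '^vfio_pci' && echo "vfio-pci module loaded"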
00:05:11.200 EAL: Setting maximum number of open files to 524288 00:05:11.200 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.200 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:11.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.200 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:11.200 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.200 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:11.200 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:11.200 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.200 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:11.200 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:11.200 EAL: Hugepages will be freed exactly as allocated. 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Refined arch frequency 2400000000 to measured frequency 2394368565 00:05:11.200 EAL: TSC frequency is ~2394400 KHz 00:05:11.200 EAL: Main lcore 0 is ready (tid=7fb5983f1a00;cpuset=[0]) 00:05:11.200 EAL: Trying to obtain current memory policy. 00:05:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.200 EAL: Restoring previous memory policy: 0 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.200 00:05:11.200 00:05:11.200 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.200 http://cunit.sourceforge.net/ 00:05:11.200 00:05:11.200 00:05:11.200 Suite: components_suite 00:05:11.200 Test: vtophys_malloc_test ...passed 00:05:11.200 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.200 EAL: Restoring previous memory policy: 4 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.200 EAL: Trying to obtain current memory policy. 00:05:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.200 EAL: Restoring previous memory policy: 4 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.200 EAL: Trying to obtain current memory policy. 00:05:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.200 EAL: Restoring previous memory policy: 4 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.200 EAL: Trying to obtain current memory policy. 
00:05:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.200 EAL: Restoring previous memory policy: 4 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.200 EAL: Trying to obtain current memory policy. 00:05:11.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.200 EAL: Restoring previous memory policy: 4 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.200 EAL: request: mp_malloc_sync 00:05:11.200 EAL: No shared files mode enabled, IPC is disabled 00:05:11.200 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.201 EAL: request: mp_malloc_sync 00:05:11.201 EAL: No shared files mode enabled, IPC is disabled 00:05:11.201 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.201 EAL: Trying to obtain current memory policy. 00:05:11.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.201 EAL: Restoring previous memory policy: 4 00:05:11.201 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.201 EAL: request: mp_malloc_sync 00:05:11.201 EAL: No shared files mode enabled, IPC is disabled 00:05:11.201 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.201 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.201 EAL: request: mp_malloc_sync 00:05:11.201 EAL: No shared files mode enabled, IPC is disabled 00:05:11.201 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.201 EAL: Trying to obtain current memory policy. 00:05:11.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.462 EAL: Restoring previous memory policy: 4 00:05:11.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.462 EAL: request: mp_malloc_sync 00:05:11.462 EAL: No shared files mode enabled, IPC is disabled 00:05:11.462 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.462 EAL: request: mp_malloc_sync 00:05:11.462 EAL: No shared files mode enabled, IPC is disabled 00:05:11.462 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.462 EAL: Trying to obtain current memory policy. 00:05:11.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.462 EAL: Restoring previous memory policy: 4 00:05:11.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.462 EAL: request: mp_malloc_sync 00:05:11.462 EAL: No shared files mode enabled, IPC is disabled 00:05:11.462 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.462 EAL: request: mp_malloc_sync 00:05:11.462 EAL: No shared files mode enabled, IPC is disabled 00:05:11.462 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.462 EAL: Trying to obtain current memory policy. 
00:05:11.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.462 EAL: Restoring previous memory policy: 4 00:05:11.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.462 EAL: request: mp_malloc_sync 00:05:11.462 EAL: No shared files mode enabled, IPC is disabled 00:05:11.462 EAL: Heap on socket 0 was expanded by 514MB 00:05:11.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.723 EAL: request: mp_malloc_sync 00:05:11.723 EAL: No shared files mode enabled, IPC is disabled 00:05:11.723 EAL: Heap on socket 0 was shrunk by 514MB 00:05:11.723 EAL: Trying to obtain current memory policy. 00:05:11.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.723 EAL: Restoring previous memory policy: 4 00:05:11.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.723 EAL: request: mp_malloc_sync 00:05:11.723 EAL: No shared files mode enabled, IPC is disabled 00:05:11.723 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.985 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.985 EAL: request: mp_malloc_sync 00:05:11.985 EAL: No shared files mode enabled, IPC is disabled 00:05:11.985 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.985 passed 00:05:11.985 00:05:11.985 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.985 suites 1 1 n/a 0 0 00:05:11.985 tests 2 2 2 0 0 00:05:11.985 asserts 497 497 497 0 n/a 00:05:11.985 00:05:11.985 Elapsed time = 0.688 seconds 00:05:11.985 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.985 EAL: request: mp_malloc_sync 00:05:11.985 EAL: No shared files mode enabled, IPC is disabled 00:05:11.985 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.985 EAL: No shared files mode enabled, IPC is disabled 00:05:11.985 EAL: No shared files mode enabled, IPC is disabled 00:05:11.985 EAL: No shared files mode enabled, IPC is disabled 00:05:11.985 00:05:11.985 real 0m0.934s 00:05:11.985 user 0m0.422s 00:05:11.985 sys 0m0.380s 00:05:11.985 14:01:15 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.985 14:01:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:11.985 ************************************ 00:05:11.985 END TEST env_vtophys 00:05:11.985 ************************************ 00:05:11.985 14:01:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.985 14:01:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.985 14:01:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.985 14:01:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.985 ************************************ 00:05:11.985 START TEST env_pci 00:05:11.985 ************************************ 00:05:11.985 14:01:15 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.985 00:05:11.985 00:05:11.985 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.985 http://cunit.sourceforge.net/ 00:05:11.985 00:05:11.985 00:05:11.985 Suite: pci 00:05:11.985 Test: pci_hook ...[2024-10-13 14:01:15.658491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1454279 has claimed it 00:05:11.985 EAL: Cannot find device (10000:00:01.0) 00:05:11.985 EAL: Failed to attach device on primary process 00:05:11.985 passed 00:05:11.985 00:05:11.985 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:11.985 suites 1 1 n/a 0 0 00:05:11.985 tests 1 1 1 0 0 00:05:11.985 asserts 25 25 25 0 n/a 00:05:11.985 00:05:11.985 Elapsed time = 0.032 seconds 00:05:12.247 00:05:12.247 real 0m0.052s 00:05:12.247 user 0m0.014s 00:05:12.247 sys 0m0.038s 00:05:12.247 14:01:15 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.247 14:01:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 ************************************ 00:05:12.247 END TEST env_pci 00:05:12.247 ************************************ 00:05:12.247 14:01:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.247 14:01:15 env -- env/env.sh@15 -- # uname 00:05:12.247 14:01:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.247 14:01:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.247 14:01:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.247 14:01:15 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:12.247 14:01:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.247 14:01:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 ************************************ 00:05:12.247 START TEST env_dpdk_post_init 00:05:12.247 ************************************ 00:05:12.247 14:01:15 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.247 EAL: Detected CPU lcores: 128 00:05:12.247 EAL: Detected NUMA nodes: 2 00:05:12.247 EAL: Detected shared linkage of DPDK 00:05:12.247 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.247 EAL: Selected IOVA mode 'VA' 00:05:12.247 EAL: VFIO support initialized 00:05:12.508 EAL: Using IOMMU type 1 (Type 1) 00:05:16.715 Starting DPDK initialization... 00:05:16.715 Starting SPDK post initialization... 00:05:16.715 SPDK NVMe probe 00:05:16.715 Attaching to 0000:65:00.0 00:05:16.715 Attached to 0000:65:00.0 00:05:16.715 Cleaning up... 
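The argv assembly traced at env.sh@14-22 pins the post-init test to a single core and, on Linux, fixes DPDK's base virtual address so the memory map lands at a predictable location (useful for multi-process setups). Stated on its own, with the same flags as the run above:

    # Stand-alone restatement of the env.sh argv build traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    argv='-c 0x1 '                                    # run on lcore 0 only
    [ "$(uname)" = Linux ] && argv+='--base-virtaddr=0x200000000000'
    "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv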
00:05:18.100 00:05:18.100 real 0m5.836s 00:05:18.100 user 0m0.086s 00:05:18.100 sys 0m0.200s 00:05:18.100 14:01:21 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.100 14:01:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.100 ************************************ 00:05:18.100 END TEST env_dpdk_post_init 00:05:18.100 ************************************ 00:05:18.100 14:01:21 env -- env/env.sh@26 -- # uname 00:05:18.100 14:01:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:18.100 14:01:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.100 14:01:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.100 14:01:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.100 14:01:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.100 ************************************ 00:05:18.100 START TEST env_mem_callbacks 00:05:18.100 ************************************ 00:05:18.100 14:01:21 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:18.100 EAL: Detected CPU lcores: 128 00:05:18.100 EAL: Detected NUMA nodes: 2 00:05:18.100 EAL: Detected shared linkage of DPDK 00:05:18.100 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.100 EAL: Selected IOVA mode 'VA' 00:05:18.100 EAL: VFIO support initialized 00:05:18.361 00:05:18.361 00:05:18.361 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.361 http://cunit.sourceforge.net/ 00:05:18.361 00:05:18.361 00:05:18.361 Suite: memory 00:05:18.361 Test: test ... 00:05:18.361 register 0x200000200000 2097152 00:05:18.361 malloc 3145728 00:05:18.362 register 0x200000400000 4194304 00:05:18.362 buf 0x200000500000 len 3145728 PASSED 00:05:18.362 malloc 64 00:05:18.362 buf 0x2000004fff40 len 64 PASSED 00:05:18.362 malloc 4194304 00:05:18.362 register 0x200000800000 6291456 00:05:18.362 buf 0x200000a00000 len 4194304 PASSED 00:05:18.362 free 0x200000500000 3145728 00:05:18.362 free 0x2000004fff40 64 00:05:18.362 unregister 0x200000400000 4194304 PASSED 00:05:18.362 free 0x200000a00000 4194304 00:05:18.362 unregister 0x200000800000 6291456 PASSED 00:05:18.362 malloc 8388608 00:05:18.362 register 0x200000400000 10485760 00:05:18.362 buf 0x200000600000 len 8388608 PASSED 00:05:18.362 free 0x200000600000 8388608 00:05:18.362 unregister 0x200000400000 10485760 PASSED 00:05:18.362 passed 00:05:18.362 00:05:18.362 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.362 suites 1 1 n/a 0 0 00:05:18.362 tests 1 1 1 0 0 00:05:18.362 asserts 15 15 15 0 n/a 00:05:18.362 00:05:18.362 Elapsed time = 0.010 seconds 00:05:18.362 00:05:18.362 real 0m0.170s 00:05:18.362 user 0m0.021s 00:05:18.362 sys 0m0.048s 00:05:18.362 14:01:21 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.362 14:01:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:18.362 ************************************ 00:05:18.362 END TEST env_mem_callbacks 00:05:18.362 ************************************ 00:05:18.362 00:05:18.362 real 0m7.816s 00:05:18.362 user 0m1.000s 00:05:18.362 sys 0m1.068s 00:05:18.362 14:01:21 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.362 14:01:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.362 ************************************ 00:05:18.362 END TEST env 
00:05:18.362 ************************************ 00:05:18.362 14:01:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.362 14:01:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.362 14:01:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.362 14:01:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.362 ************************************ 00:05:18.362 START TEST rpc 00:05:18.362 ************************************ 00:05:18.362 14:01:22 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:18.624 * Looking for test storage... 00:05:18.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.624 14:01:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.624 14:01:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.624 14:01:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.624 14:01:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.624 14:01:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.624 14:01:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.624 14:01:22 rpc -- scripts/common.sh@345 -- # : 1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.624 14:01:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.624 14:01:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.624 14:01:22 rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.624 14:01:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.624 14:01:22 rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.624 14:01:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.624 14:01:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.624 14:01:22 rpc -- scripts/common.sh@368 -- # return 0 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.624 --rc genhtml_branch_coverage=1 00:05:18.624 --rc genhtml_function_coverage=1 00:05:18.624 --rc genhtml_legend=1 00:05:18.624 --rc geninfo_all_blocks=1 00:05:18.624 --rc geninfo_unexecuted_blocks=1 00:05:18.624 00:05:18.624 ' 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.624 --rc genhtml_branch_coverage=1 00:05:18.624 --rc genhtml_function_coverage=1 00:05:18.624 --rc genhtml_legend=1 00:05:18.624 --rc geninfo_all_blocks=1 00:05:18.624 --rc geninfo_unexecuted_blocks=1 00:05:18.624 00:05:18.624 ' 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.624 --rc genhtml_branch_coverage=1 00:05:18.624 --rc genhtml_function_coverage=1 00:05:18.624 --rc genhtml_legend=1 00:05:18.624 --rc geninfo_all_blocks=1 00:05:18.624 --rc geninfo_unexecuted_blocks=1 00:05:18.624 00:05:18.624 ' 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.624 --rc genhtml_branch_coverage=1 00:05:18.624 --rc genhtml_function_coverage=1 00:05:18.624 --rc genhtml_legend=1 00:05:18.624 --rc geninfo_all_blocks=1 00:05:18.624 --rc geninfo_unexecuted_blocks=1 00:05:18.624 00:05:18.624 ' 00:05:18.624 14:01:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1455598 00:05:18.624 14:01:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.624 14:01:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1455598 00:05:18.624 14:01:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@831 -- # '[' -z 1455598 ']' 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
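The harness above launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and blocks until the target answers on the default UNIX-domain RPC socket. A minimal sketch of that launch-and-wait sequence, assuming a stock SPDK checkout at $SPDK_DIR (path hypothetical) and the default socket path:

```bash
# Sketch of the launch-and-wait step above; $SPDK_DIR is an assumed checkout path.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &   # -e bdev: enable the bdev tracepoint group
spdk_pid=$!
# waitforlisten effectively polls the RPC socket until the target responds.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done
echo "spdk_tgt (pid $spdk_pid) is listening on /var/tmp/spdk.sock"
```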
00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.624 14:01:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.624 [2024-10-13 14:01:22.277133] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:18.624 [2024-10-13 14:01:22.277208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1455598 ] 00:05:18.886 [2024-10-13 14:01:22.412862] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:18.886 [2024-10-13 14:01:22.461413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.886 [2024-10-13 14:01:22.489167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:18.886 [2024-10-13 14:01:22.489209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1455598' to capture a snapshot of events at runtime. 00:05:18.886 [2024-10-13 14:01:22.489217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.886 [2024-10-13 14:01:22.489224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.886 [2024-10-13 14:01:22.489230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1455598 for offline analysis/debug. 00:05:18.886 [2024-10-13 14:01:22.489984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.459 14:01:23 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.459 14:01:23 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:19.459 14:01:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.459 14:01:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:19.459 14:01:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:19.459 14:01:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:19.459 14:01:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.459 14:01:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.459 14:01:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.459 ************************************ 00:05:19.459 START TEST rpc_integrity 00:05:19.459 ************************************ 00:05:19.459 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:19.459 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.459 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.459 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.459 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.459 14:01:23 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.459 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.720 { 00:05:19.720 "name": "Malloc0", 00:05:19.720 "aliases": [ 00:05:19.720 "14f6bc41-d7ed-48eb-a043-b4ca444c669b" 00:05:19.720 ], 00:05:19.720 "product_name": "Malloc disk", 00:05:19.720 "block_size": 512, 00:05:19.720 "num_blocks": 16384, 00:05:19.720 "uuid": "14f6bc41-d7ed-48eb-a043-b4ca444c669b", 00:05:19.720 "assigned_rate_limits": { 00:05:19.720 "rw_ios_per_sec": 0, 00:05:19.720 "rw_mbytes_per_sec": 0, 00:05:19.720 "r_mbytes_per_sec": 0, 00:05:19.720 "w_mbytes_per_sec": 0 00:05:19.720 }, 00:05:19.720 "claimed": false, 00:05:19.720 "zoned": false, 00:05:19.720 "supported_io_types": { 00:05:19.720 "read": true, 00:05:19.720 "write": true, 00:05:19.720 "unmap": true, 00:05:19.720 "flush": true, 00:05:19.720 "reset": true, 00:05:19.720 "nvme_admin": false, 00:05:19.720 "nvme_io": false, 00:05:19.720 "nvme_io_md": false, 00:05:19.720 "write_zeroes": true, 00:05:19.720 "zcopy": true, 00:05:19.720 "get_zone_info": false, 00:05:19.720 "zone_management": false, 00:05:19.720 "zone_append": false, 00:05:19.720 "compare": false, 00:05:19.720 "compare_and_write": false, 00:05:19.720 "abort": true, 00:05:19.720 "seek_hole": false, 00:05:19.720 "seek_data": false, 00:05:19.720 "copy": true, 00:05:19.720 "nvme_iov_md": false 00:05:19.720 }, 00:05:19.720 "memory_domains": [ 00:05:19.720 { 00:05:19.720 "dma_device_id": "system", 00:05:19.720 "dma_device_type": 1 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.720 "dma_device_type": 2 00:05:19.720 } 00:05:19.720 ], 00:05:19.720 "driver_specific": {} 00:05:19.720 } 00:05:19.720 ]' 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 [2024-10-13 14:01:23.252144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:19.720 [2024-10-13 14:01:23.252187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.720 [2024-10-13 14:01:23.252203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18cbe50 00:05:19.720 [2024-10-13 14:01:23.252211] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.720 [2024-10-13 14:01:23.253759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.720 [2024-10-13 14:01:23.253796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.720 Passthru0 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.720 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.720 { 00:05:19.720 "name": "Malloc0", 00:05:19.720 "aliases": [ 00:05:19.720 "14f6bc41-d7ed-48eb-a043-b4ca444c669b" 00:05:19.720 ], 00:05:19.720 "product_name": "Malloc disk", 00:05:19.720 "block_size": 512, 00:05:19.720 "num_blocks": 16384, 00:05:19.720 "uuid": "14f6bc41-d7ed-48eb-a043-b4ca444c669b", 00:05:19.720 "assigned_rate_limits": { 00:05:19.720 "rw_ios_per_sec": 0, 00:05:19.720 "rw_mbytes_per_sec": 0, 00:05:19.720 "r_mbytes_per_sec": 0, 00:05:19.721 "w_mbytes_per_sec": 0 00:05:19.721 }, 00:05:19.721 "claimed": true, 00:05:19.721 "claim_type": "exclusive_write", 00:05:19.721 "zoned": false, 00:05:19.721 "supported_io_types": { 00:05:19.721 "read": true, 00:05:19.721 "write": true, 00:05:19.721 "unmap": true, 00:05:19.721 "flush": true, 00:05:19.721 "reset": true, 00:05:19.721 "nvme_admin": false, 00:05:19.721 "nvme_io": false, 00:05:19.721 "nvme_io_md": false, 00:05:19.721 "write_zeroes": true, 00:05:19.721 "zcopy": true, 00:05:19.721 "get_zone_info": false, 00:05:19.721 "zone_management": false, 00:05:19.721 "zone_append": false, 00:05:19.721 "compare": false, 00:05:19.721 "compare_and_write": false, 00:05:19.721 "abort": true, 00:05:19.721 "seek_hole": false, 00:05:19.721 "seek_data": false, 00:05:19.721 "copy": true, 00:05:19.721 "nvme_iov_md": false 00:05:19.721 }, 00:05:19.721 "memory_domains": [ 00:05:19.721 { 00:05:19.721 "dma_device_id": "system", 00:05:19.721 "dma_device_type": 1 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.721 "dma_device_type": 2 00:05:19.721 } 00:05:19.721 ], 00:05:19.721 "driver_specific": {} 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "name": "Passthru0", 00:05:19.721 "aliases": [ 00:05:19.721 "a313df62-140c-5c44-a6d3-bfd826a22b04" 00:05:19.721 ], 00:05:19.721 "product_name": "passthru", 00:05:19.721 "block_size": 512, 00:05:19.721 "num_blocks": 16384, 00:05:19.721 "uuid": "a313df62-140c-5c44-a6d3-bfd826a22b04", 00:05:19.721 "assigned_rate_limits": { 00:05:19.721 "rw_ios_per_sec": 0, 00:05:19.721 "rw_mbytes_per_sec": 0, 00:05:19.721 "r_mbytes_per_sec": 0, 00:05:19.721 "w_mbytes_per_sec": 0 00:05:19.721 }, 00:05:19.721 "claimed": false, 00:05:19.721 "zoned": false, 00:05:19.721 "supported_io_types": { 00:05:19.721 "read": true, 00:05:19.721 "write": true, 00:05:19.721 "unmap": true, 00:05:19.721 "flush": true, 00:05:19.721 "reset": true, 00:05:19.721 "nvme_admin": false, 00:05:19.721 "nvme_io": false, 00:05:19.721 "nvme_io_md": false, 00:05:19.721 "write_zeroes": true, 00:05:19.721 "zcopy": true, 00:05:19.721 "get_zone_info": false, 00:05:19.721 "zone_management": false, 00:05:19.721 "zone_append": false, 00:05:19.721 "compare": false, 00:05:19.721 "compare_and_write": 
false, 00:05:19.721 "abort": true, 00:05:19.721 "seek_hole": false, 00:05:19.721 "seek_data": false, 00:05:19.721 "copy": true, 00:05:19.721 "nvme_iov_md": false 00:05:19.721 }, 00:05:19.721 "memory_domains": [ 00:05:19.721 { 00:05:19.721 "dma_device_id": "system", 00:05:19.721 "dma_device_type": 1 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.721 "dma_device_type": 2 00:05:19.721 } 00:05:19.721 ], 00:05:19.721 "driver_specific": { 00:05:19.721 "passthru": { 00:05:19.721 "name": "Passthru0", 00:05:19.721 "base_bdev_name": "Malloc0" 00:05:19.721 } 00:05:19.721 } 00:05:19.721 } 00:05:19.721 ]' 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.721 14:01:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.721 00:05:19.721 real 0m0.306s 00:05:19.721 user 0m0.181s 00:05:19.721 sys 0m0.050s 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.721 14:01:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.721 ************************************ 00:05:19.721 END TEST rpc_integrity 00:05:19.721 ************************************ 00:05:19.982 14:01:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:19.982 14:01:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.982 14:01:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.982 14:01:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.982 ************************************ 00:05:19.982 START TEST rpc_plugins 00:05:19.982 ************************************ 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:19.982 { 00:05:19.982 "name": "Malloc1", 00:05:19.982 "aliases": [ 00:05:19.982 "e2c92d41-debf-42f1-861d-04537e056d01" 00:05:19.982 ], 00:05:19.982 "product_name": "Malloc disk", 00:05:19.982 "block_size": 4096, 00:05:19.982 "num_blocks": 256, 00:05:19.982 "uuid": "e2c92d41-debf-42f1-861d-04537e056d01", 00:05:19.982 "assigned_rate_limits": { 00:05:19.982 "rw_ios_per_sec": 0, 00:05:19.982 "rw_mbytes_per_sec": 0, 00:05:19.982 "r_mbytes_per_sec": 0, 00:05:19.982 "w_mbytes_per_sec": 0 00:05:19.982 }, 00:05:19.982 "claimed": false, 00:05:19.982 "zoned": false, 00:05:19.982 "supported_io_types": { 00:05:19.982 "read": true, 00:05:19.982 "write": true, 00:05:19.982 "unmap": true, 00:05:19.982 "flush": true, 00:05:19.982 "reset": true, 00:05:19.982 "nvme_admin": false, 00:05:19.982 "nvme_io": false, 00:05:19.982 "nvme_io_md": false, 00:05:19.982 "write_zeroes": true, 00:05:19.982 "zcopy": true, 00:05:19.982 "get_zone_info": false, 00:05:19.982 "zone_management": false, 00:05:19.982 "zone_append": false, 00:05:19.982 "compare": false, 00:05:19.982 "compare_and_write": false, 00:05:19.982 "abort": true, 00:05:19.982 "seek_hole": false, 00:05:19.982 "seek_data": false, 00:05:19.982 "copy": true, 00:05:19.982 "nvme_iov_md": false 00:05:19.982 }, 00:05:19.982 "memory_domains": [ 00:05:19.982 { 00:05:19.982 "dma_device_id": "system", 00:05:19.982 "dma_device_type": 1 00:05:19.982 }, 00:05:19.982 { 00:05:19.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.982 "dma_device_type": 2 00:05:19.982 } 00:05:19.982 ], 00:05:19.982 "driver_specific": {} 00:05:19.982 } 00:05:19.982 ]' 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:19.982 14:01:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:19.982 00:05:19.982 real 0m0.155s 00:05:19.982 user 0m0.101s 00:05:19.982 sys 0m0.018s 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.982 14:01:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.982 ************************************ 00:05:19.982 END TEST rpc_plugins 00:05:19.982 ************************************ 00:05:20.244 14:01:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:20.244 14:01:23 rpc 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.244 14:01:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.244 14:01:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.244 ************************************ 00:05:20.244 START TEST rpc_trace_cmd_test 00:05:20.244 ************************************ 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:20.244 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1455598", 00:05:20.244 "tpoint_group_mask": "0x8", 00:05:20.244 "iscsi_conn": { 00:05:20.244 "mask": "0x2", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "scsi": { 00:05:20.244 "mask": "0x4", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "bdev": { 00:05:20.244 "mask": "0x8", 00:05:20.244 "tpoint_mask": "0xffffffffffffffff" 00:05:20.244 }, 00:05:20.244 "nvmf_rdma": { 00:05:20.244 "mask": "0x10", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "nvmf_tcp": { 00:05:20.244 "mask": "0x20", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "ftl": { 00:05:20.244 "mask": "0x40", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "blobfs": { 00:05:20.244 "mask": "0x80", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "dsa": { 00:05:20.244 "mask": "0x200", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "thread": { 00:05:20.244 "mask": "0x400", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "nvme_pcie": { 00:05:20.244 "mask": "0x800", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "iaa": { 00:05:20.244 "mask": "0x1000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "nvme_tcp": { 00:05:20.244 "mask": "0x2000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "bdev_nvme": { 00:05:20.244 "mask": "0x4000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "sock": { 00:05:20.244 "mask": "0x8000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "blob": { 00:05:20.244 "mask": "0x10000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "bdev_raid": { 00:05:20.244 "mask": "0x20000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 }, 00:05:20.244 "scheduler": { 00:05:20.244 "mask": "0x40000", 00:05:20.244 "tpoint_mask": "0x0" 00:05:20.244 } 00:05:20.244 }' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:20.244 14:01:23 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:20.244 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:20.505 14:01:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:20.505 00:05:20.505 real 0m0.253s 00:05:20.505 user 0m0.206s 00:05:20.505 sys 0m0.037s 00:05:20.505 14:01:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.505 14:01:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:20.505 ************************************ 00:05:20.505 END TEST rpc_trace_cmd_test 00:05:20.505 ************************************ 00:05:20.505 14:01:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:20.505 14:01:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:20.505 14:01:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:20.505 14:01:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.505 14:01:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.505 14:01:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.505 ************************************ 00:05:20.505 START TEST rpc_daemon_integrity 00:05:20.505 ************************************ 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:20.505 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:20.506 { 00:05:20.506 "name": "Malloc2", 00:05:20.506 "aliases": [ 00:05:20.506 "e1ca8c73-3684-4daf-a9cb-8835c8a59fad" 00:05:20.506 ], 00:05:20.506 "product_name": "Malloc disk", 00:05:20.506 "block_size": 512, 00:05:20.506 "num_blocks": 16384, 00:05:20.506 "uuid": "e1ca8c73-3684-4daf-a9cb-8835c8a59fad", 00:05:20.506 "assigned_rate_limits": { 00:05:20.506 "rw_ios_per_sec": 0, 00:05:20.506 "rw_mbytes_per_sec": 0, 00:05:20.506 "r_mbytes_per_sec": 0, 00:05:20.506 "w_mbytes_per_sec": 0 00:05:20.506 }, 00:05:20.506 "claimed": false, 00:05:20.506 "zoned": false, 00:05:20.506 "supported_io_types": { 00:05:20.506 
"read": true, 00:05:20.506 "write": true, 00:05:20.506 "unmap": true, 00:05:20.506 "flush": true, 00:05:20.506 "reset": true, 00:05:20.506 "nvme_admin": false, 00:05:20.506 "nvme_io": false, 00:05:20.506 "nvme_io_md": false, 00:05:20.506 "write_zeroes": true, 00:05:20.506 "zcopy": true, 00:05:20.506 "get_zone_info": false, 00:05:20.506 "zone_management": false, 00:05:20.506 "zone_append": false, 00:05:20.506 "compare": false, 00:05:20.506 "compare_and_write": false, 00:05:20.506 "abort": true, 00:05:20.506 "seek_hole": false, 00:05:20.506 "seek_data": false, 00:05:20.506 "copy": true, 00:05:20.506 "nvme_iov_md": false 00:05:20.506 }, 00:05:20.506 "memory_domains": [ 00:05:20.506 { 00:05:20.506 "dma_device_id": "system", 00:05:20.506 "dma_device_type": 1 00:05:20.506 }, 00:05:20.506 { 00:05:20.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.506 "dma_device_type": 2 00:05:20.506 } 00:05:20.506 ], 00:05:20.506 "driver_specific": {} 00:05:20.506 } 00:05:20.506 ]' 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.506 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.506 [2024-10-13 14:01:24.212513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:20.506 [2024-10-13 14:01:24.212557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.506 [2024-10-13 14:01:24.212575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18cf480 00:05:20.506 [2024-10-13 14:01:24.212583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.767 [2024-10-13 14:01:24.214036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.767 [2024-10-13 14:01:24.214086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:20.767 Passthru0 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:20.767 { 00:05:20.767 "name": "Malloc2", 00:05:20.767 "aliases": [ 00:05:20.767 "e1ca8c73-3684-4daf-a9cb-8835c8a59fad" 00:05:20.767 ], 00:05:20.767 "product_name": "Malloc disk", 00:05:20.767 "block_size": 512, 00:05:20.767 "num_blocks": 16384, 00:05:20.767 "uuid": "e1ca8c73-3684-4daf-a9cb-8835c8a59fad", 00:05:20.767 "assigned_rate_limits": { 00:05:20.767 "rw_ios_per_sec": 0, 00:05:20.767 "rw_mbytes_per_sec": 0, 00:05:20.767 "r_mbytes_per_sec": 0, 00:05:20.767 "w_mbytes_per_sec": 0 00:05:20.767 }, 00:05:20.767 "claimed": true, 00:05:20.767 "claim_type": "exclusive_write", 00:05:20.767 "zoned": false, 00:05:20.767 "supported_io_types": { 00:05:20.767 "read": true, 00:05:20.767 "write": true, 00:05:20.767 "unmap": true, 00:05:20.767 "flush": true, 00:05:20.767 "reset": true, 
00:05:20.767 "nvme_admin": false, 00:05:20.767 "nvme_io": false, 00:05:20.767 "nvme_io_md": false, 00:05:20.767 "write_zeroes": true, 00:05:20.767 "zcopy": true, 00:05:20.767 "get_zone_info": false, 00:05:20.767 "zone_management": false, 00:05:20.767 "zone_append": false, 00:05:20.767 "compare": false, 00:05:20.767 "compare_and_write": false, 00:05:20.767 "abort": true, 00:05:20.767 "seek_hole": false, 00:05:20.767 "seek_data": false, 00:05:20.767 "copy": true, 00:05:20.767 "nvme_iov_md": false 00:05:20.767 }, 00:05:20.767 "memory_domains": [ 00:05:20.767 { 00:05:20.767 "dma_device_id": "system", 00:05:20.767 "dma_device_type": 1 00:05:20.767 }, 00:05:20.767 { 00:05:20.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.767 "dma_device_type": 2 00:05:20.767 } 00:05:20.767 ], 00:05:20.767 "driver_specific": {} 00:05:20.767 }, 00:05:20.767 { 00:05:20.767 "name": "Passthru0", 00:05:20.767 "aliases": [ 00:05:20.767 "4fdcea34-d870-5cf5-b012-d8b393b1c282" 00:05:20.767 ], 00:05:20.767 "product_name": "passthru", 00:05:20.767 "block_size": 512, 00:05:20.767 "num_blocks": 16384, 00:05:20.767 "uuid": "4fdcea34-d870-5cf5-b012-d8b393b1c282", 00:05:20.767 "assigned_rate_limits": { 00:05:20.767 "rw_ios_per_sec": 0, 00:05:20.767 "rw_mbytes_per_sec": 0, 00:05:20.767 "r_mbytes_per_sec": 0, 00:05:20.767 "w_mbytes_per_sec": 0 00:05:20.767 }, 00:05:20.767 "claimed": false, 00:05:20.767 "zoned": false, 00:05:20.767 "supported_io_types": { 00:05:20.767 "read": true, 00:05:20.767 "write": true, 00:05:20.767 "unmap": true, 00:05:20.767 "flush": true, 00:05:20.767 "reset": true, 00:05:20.767 "nvme_admin": false, 00:05:20.767 "nvme_io": false, 00:05:20.767 "nvme_io_md": false, 00:05:20.767 "write_zeroes": true, 00:05:20.767 "zcopy": true, 00:05:20.767 "get_zone_info": false, 00:05:20.767 "zone_management": false, 00:05:20.767 "zone_append": false, 00:05:20.767 "compare": false, 00:05:20.767 "compare_and_write": false, 00:05:20.767 "abort": true, 00:05:20.767 "seek_hole": false, 00:05:20.767 "seek_data": false, 00:05:20.767 "copy": true, 00:05:20.767 "nvme_iov_md": false 00:05:20.767 }, 00:05:20.767 "memory_domains": [ 00:05:20.767 { 00:05:20.767 "dma_device_id": "system", 00:05:20.767 "dma_device_type": 1 00:05:20.767 }, 00:05:20.767 { 00:05:20.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:20.767 "dma_device_type": 2 00:05:20.767 } 00:05:20.767 ], 00:05:20.767 "driver_specific": { 00:05:20.767 "passthru": { 00:05:20.767 "name": "Passthru0", 00:05:20.767 "base_bdev_name": "Malloc2" 00:05:20.767 } 00:05:20.767 } 00:05:20.767 } 00:05:20.767 ]' 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.767 14:01:24 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:20.767 00:05:20.767 real 0m0.304s 00:05:20.767 user 0m0.191s 00:05:20.767 sys 0m0.047s 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.767 14:01:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:20.767 ************************************ 00:05:20.768 END TEST rpc_daemon_integrity 00:05:20.768 ************************************ 00:05:20.768 14:01:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:20.768 14:01:24 rpc -- rpc/rpc.sh@84 -- # killprocess 1455598 00:05:20.768 14:01:24 rpc -- common/autotest_common.sh@950 -- # '[' -z 1455598 ']' 00:05:20.768 14:01:24 rpc -- common/autotest_common.sh@954 -- # kill -0 1455598 00:05:20.768 14:01:24 rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.768 14:01:24 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.768 14:01:24 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1455598 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1455598' 00:05:21.028 killing process with pid 1455598 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@969 -- # kill 1455598 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@974 -- # wait 1455598 00:05:21.028 00:05:21.028 real 0m2.709s 00:05:21.028 user 0m3.338s 00:05:21.028 sys 0m0.858s 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.028 14:01:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.028 ************************************ 00:05:21.028 END TEST rpc 00:05:21.028 ************************************ 00:05:21.289 14:01:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.289 14:01:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.289 14:01:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.289 14:01:24 -- common/autotest_common.sh@10 -- # set +x 00:05:21.289 ************************************ 00:05:21.289 START TEST skip_rpc 00:05:21.289 ************************************ 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:21.289 * Looking for test storage... 
00:05:21.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.289 14:01:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.289 --rc genhtml_branch_coverage=1 00:05:21.289 --rc genhtml_function_coverage=1 00:05:21.289 --rc genhtml_legend=1 00:05:21.289 --rc geninfo_all_blocks=1 00:05:21.289 --rc geninfo_unexecuted_blocks=1 00:05:21.289 00:05:21.289 ' 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.289 --rc genhtml_branch_coverage=1 00:05:21.289 --rc genhtml_function_coverage=1 00:05:21.289 --rc genhtml_legend=1 00:05:21.289 --rc geninfo_all_blocks=1 00:05:21.289 --rc geninfo_unexecuted_blocks=1 00:05:21.289 00:05:21.289 ' 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.289 --rc genhtml_branch_coverage=1 00:05:21.289 --rc genhtml_function_coverage=1 00:05:21.289 --rc genhtml_legend=1 00:05:21.289 --rc geninfo_all_blocks=1 00:05:21.289 --rc geninfo_unexecuted_blocks=1 00:05:21.289 00:05:21.289 ' 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.289 --rc genhtml_branch_coverage=1 00:05:21.289 --rc genhtml_function_coverage=1 00:05:21.289 --rc genhtml_legend=1 00:05:21.289 --rc geninfo_all_blocks=1 00:05:21.289 --rc geninfo_unexecuted_blocks=1 00:05:21.289 00:05:21.289 ' 00:05:21.289 14:01:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.289 14:01:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:21.289 14:01:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.289 14:01:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.550 ************************************ 00:05:21.550 START TEST skip_rpc 00:05:21.550 ************************************ 00:05:21.550 14:01:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:21.550 14:01:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1456311 00:05:21.550 14:01:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.550 14:01:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:21.550 14:01:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:21.550 [2024-10-13 14:01:25.094211] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:21.550 [2024-10-13 14:01:25.094271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1456311 ] 00:05:21.550 [2024-10-13 14:01:25.228944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
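Here skip_rpc starts spdk_tgt with --no-rpc-server, so the target boots but never opens /var/tmp/spdk.sock; the check that follows therefore expects an RPC call to fail. A sketch of that negative probe, reusing the hypothetical $SPDK_DIR from above:

```bash
# Negative probe matching the NOT rpc_cmd spdk_get_version check below:
# with --no-rpc-server the socket is never created, so rpc.py must fail.
if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version 2>/dev/null; then
    echo "unexpected: RPC server answered while --no-rpc-server was set" >&2
    exit 1
fi
echo "RPC correctly unavailable"
```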
00:05:21.811 [2024-10-13 14:01:25.279604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.811 [2024-10-13 14:01:25.307856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1456311 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1456311 ']' 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1456311 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1456311 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1456311' 00:05:27.191 killing process with pid 1456311 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1456311 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1456311 00:05:27.191 00:05:27.191 real 0m5.261s 00:05:27.191 user 0m4.927s 00:05:27.191 sys 0m0.284s 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.191 14:01:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.191 ************************************ 00:05:27.191 END TEST skip_rpc 00:05:27.191 ************************************ 00:05:27.191 14:01:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:27.191 14:01:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.191 14:01:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:05:27.191 14:01:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.191 ************************************ 00:05:27.191 START TEST skip_rpc_with_json 00:05:27.191 ************************************ 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1457344 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1457344 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1457344 ']' 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.191 14:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.191 [2024-10-13 14:01:30.430842] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:27.191 [2024-10-13 14:01:30.430908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1457344 ] 00:05:27.191 [2024-10-13 14:01:30.564303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
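The run below first shows nvmf_get_transports failing before any transport exists, then creates a TCP transport, snapshots the live configuration with save_config, and finally restarts the target from that JSON file. A sketch of the equivalent manual round trip, with $SPDK_DIR as before and CONFIG_PATH a hypothetical output location:

```bash
# Round trip matching the sequence below: create a transport, snapshot the
# running config, then boot a fresh target from the snapshot (paths assumed).
CONFIG_PATH=/tmp/config.json
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp
"$SPDK_DIR/scripts/rpc.py" save_config > "$CONFIG_PATH"
# A target started with --json needs no further RPC setup, hence --no-rpc-server.
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG_PATH"
```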
00:05:27.191 [2024-10-13 14:01:30.612037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.191 [2024-10-13 14:01:30.629428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.817 [2024-10-13 14:01:31.206094] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:27.817 request: 00:05:27.817 { 00:05:27.817 "trtype": "tcp", 00:05:27.817 "method": "nvmf_get_transports", 00:05:27.817 "req_id": 1 00:05:27.817 } 00:05:27.817 Got JSON-RPC error response 00:05:27.817 response: 00:05:27.817 { 00:05:27.817 "code": -19, 00:05:27.817 "message": "No such device" 00:05:27.817 } 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.817 [2024-10-13 14:01:31.218164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.817 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.817 { 00:05:27.817 "subsystems": [ 00:05:27.817 { 00:05:27.817 "subsystem": "fsdev", 00:05:27.817 "config": [ 00:05:27.817 { 00:05:27.817 "method": "fsdev_set_opts", 00:05:27.817 "params": { 00:05:27.817 "fsdev_io_pool_size": 65535, 00:05:27.817 "fsdev_io_cache_size": 256 00:05:27.817 } 00:05:27.817 } 00:05:27.817 ] 00:05:27.817 }, 00:05:27.817 { 00:05:27.817 "subsystem": "vfio_user_target", 00:05:27.817 "config": null 00:05:27.817 }, 00:05:27.817 { 00:05:27.817 "subsystem": "keyring", 00:05:27.817 "config": [] 00:05:27.817 }, 00:05:27.817 { 00:05:27.817 "subsystem": "iobuf", 00:05:27.817 "config": [ 00:05:27.817 { 00:05:27.817 "method": "iobuf_set_options", 00:05:27.817 "params": { 00:05:27.817 "small_pool_count": 8192, 00:05:27.817 "large_pool_count": 1024, 00:05:27.817 "small_bufsize": 8192, 00:05:27.817 "large_bufsize": 135168 00:05:27.817 } 00:05:27.817 } 00:05:27.817 ] 00:05:27.817 }, 00:05:27.817 { 00:05:27.817 "subsystem": "sock", 00:05:27.817 "config": [ 00:05:27.817 { 00:05:27.817 "method": "sock_set_default_impl", 00:05:27.817 "params": { 00:05:27.817 "impl_name": "posix" 00:05:27.817 } 00:05:27.817 }, 00:05:27.817 { 00:05:27.817 "method": "sock_impl_set_options", 00:05:27.817 "params": { 
00:05:27.817 "impl_name": "ssl", 00:05:27.817 "recv_buf_size": 4096, 00:05:27.817 "send_buf_size": 4096, 00:05:27.817 "enable_recv_pipe": true, 00:05:27.817 "enable_quickack": false, 00:05:27.817 "enable_placement_id": 0, 00:05:27.817 "enable_zerocopy_send_server": true, 00:05:27.817 "enable_zerocopy_send_client": false, 00:05:27.817 "zerocopy_threshold": 0, 00:05:27.817 "tls_version": 0, 00:05:27.817 "enable_ktls": false 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "sock_impl_set_options", 00:05:27.818 "params": { 00:05:27.818 "impl_name": "posix", 00:05:27.818 "recv_buf_size": 2097152, 00:05:27.818 "send_buf_size": 2097152, 00:05:27.818 "enable_recv_pipe": true, 00:05:27.818 "enable_quickack": false, 00:05:27.818 "enable_placement_id": 0, 00:05:27.818 "enable_zerocopy_send_server": true, 00:05:27.818 "enable_zerocopy_send_client": false, 00:05:27.818 "zerocopy_threshold": 0, 00:05:27.818 "tls_version": 0, 00:05:27.818 "enable_ktls": false 00:05:27.818 } 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "vmd", 00:05:27.818 "config": [] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "accel", 00:05:27.818 "config": [ 00:05:27.818 { 00:05:27.818 "method": "accel_set_options", 00:05:27.818 "params": { 00:05:27.818 "small_cache_size": 128, 00:05:27.818 "large_cache_size": 16, 00:05:27.818 "task_count": 2048, 00:05:27.818 "sequence_count": 2048, 00:05:27.818 "buf_count": 2048 00:05:27.818 } 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "bdev", 00:05:27.818 "config": [ 00:05:27.818 { 00:05:27.818 "method": "bdev_set_options", 00:05:27.818 "params": { 00:05:27.818 "bdev_io_pool_size": 65535, 00:05:27.818 "bdev_io_cache_size": 256, 00:05:27.818 "bdev_auto_examine": true, 00:05:27.818 "iobuf_small_cache_size": 128, 00:05:27.818 "iobuf_large_cache_size": 16 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "bdev_raid_set_options", 00:05:27.818 "params": { 00:05:27.818 "process_window_size_kb": 1024, 00:05:27.818 "process_max_bandwidth_mb_sec": 0 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "bdev_iscsi_set_options", 00:05:27.818 "params": { 00:05:27.818 "timeout_sec": 30 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "bdev_nvme_set_options", 00:05:27.818 "params": { 00:05:27.818 "action_on_timeout": "none", 00:05:27.818 "timeout_us": 0, 00:05:27.818 "timeout_admin_us": 0, 00:05:27.818 "keep_alive_timeout_ms": 10000, 00:05:27.818 "arbitration_burst": 0, 00:05:27.818 "low_priority_weight": 0, 00:05:27.818 "medium_priority_weight": 0, 00:05:27.818 "high_priority_weight": 0, 00:05:27.818 "nvme_adminq_poll_period_us": 10000, 00:05:27.818 "nvme_ioq_poll_period_us": 0, 00:05:27.818 "io_queue_requests": 0, 00:05:27.818 "delay_cmd_submit": true, 00:05:27.818 "transport_retry_count": 4, 00:05:27.818 "bdev_retry_count": 3, 00:05:27.818 "transport_ack_timeout": 0, 00:05:27.818 "ctrlr_loss_timeout_sec": 0, 00:05:27.818 "reconnect_delay_sec": 0, 00:05:27.818 "fast_io_fail_timeout_sec": 0, 00:05:27.818 "disable_auto_failback": false, 00:05:27.818 "generate_uuids": false, 00:05:27.818 "transport_tos": 0, 00:05:27.818 "nvme_error_stat": false, 00:05:27.818 "rdma_srq_size": 0, 00:05:27.818 "io_path_stat": false, 00:05:27.818 "allow_accel_sequence": false, 00:05:27.818 "rdma_max_cq_size": 0, 00:05:27.818 "rdma_cm_event_timeout_ms": 0, 00:05:27.818 "dhchap_digests": [ 00:05:27.818 "sha256", 00:05:27.818 "sha384", 00:05:27.818 "sha512" 
00:05:27.818 ], 00:05:27.818 "dhchap_dhgroups": [ 00:05:27.818 "null", 00:05:27.818 "ffdhe2048", 00:05:27.818 "ffdhe3072", 00:05:27.818 "ffdhe4096", 00:05:27.818 "ffdhe6144", 00:05:27.818 "ffdhe8192" 00:05:27.818 ] 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "bdev_nvme_set_hotplug", 00:05:27.818 "params": { 00:05:27.818 "period_us": 100000, 00:05:27.818 "enable": false 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "bdev_wait_for_examine" 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "scsi", 00:05:27.818 "config": null 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "scheduler", 00:05:27.818 "config": [ 00:05:27.818 { 00:05:27.818 "method": "framework_set_scheduler", 00:05:27.818 "params": { 00:05:27.818 "name": "static" 00:05:27.818 } 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "vhost_scsi", 00:05:27.818 "config": [] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "vhost_blk", 00:05:27.818 "config": [] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "ublk", 00:05:27.818 "config": [] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "nbd", 00:05:27.818 "config": [] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "nvmf", 00:05:27.818 "config": [ 00:05:27.818 { 00:05:27.818 "method": "nvmf_set_config", 00:05:27.818 "params": { 00:05:27.818 "discovery_filter": "match_any", 00:05:27.818 "admin_cmd_passthru": { 00:05:27.818 "identify_ctrlr": false 00:05:27.818 }, 00:05:27.818 "dhchap_digests": [ 00:05:27.818 "sha256", 00:05:27.818 "sha384", 00:05:27.818 "sha512" 00:05:27.818 ], 00:05:27.818 "dhchap_dhgroups": [ 00:05:27.818 "null", 00:05:27.818 "ffdhe2048", 00:05:27.818 "ffdhe3072", 00:05:27.818 "ffdhe4096", 00:05:27.818 "ffdhe6144", 00:05:27.818 "ffdhe8192" 00:05:27.818 ] 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "nvmf_set_max_subsystems", 00:05:27.818 "params": { 00:05:27.818 "max_subsystems": 1024 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "nvmf_set_crdt", 00:05:27.818 "params": { 00:05:27.818 "crdt1": 0, 00:05:27.818 "crdt2": 0, 00:05:27.818 "crdt3": 0 00:05:27.818 } 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "method": "nvmf_create_transport", 00:05:27.818 "params": { 00:05:27.818 "trtype": "TCP", 00:05:27.818 "max_queue_depth": 128, 00:05:27.818 "max_io_qpairs_per_ctrlr": 127, 00:05:27.818 "in_capsule_data_size": 4096, 00:05:27.818 "max_io_size": 131072, 00:05:27.818 "io_unit_size": 131072, 00:05:27.818 "max_aq_depth": 128, 00:05:27.818 "num_shared_buffers": 511, 00:05:27.818 "buf_cache_size": 4294967295, 00:05:27.818 "dif_insert_or_strip": false, 00:05:27.818 "zcopy": false, 00:05:27.818 "c2h_success": true, 00:05:27.818 "sock_priority": 0, 00:05:27.818 "abort_timeout_sec": 1, 00:05:27.818 "ack_timeout": 0, 00:05:27.818 "data_wr_pool_size": 0 00:05:27.818 } 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 }, 00:05:27.818 { 00:05:27.818 "subsystem": "iscsi", 00:05:27.818 "config": [ 00:05:27.818 { 00:05:27.818 "method": "iscsi_set_options", 00:05:27.818 "params": { 00:05:27.818 "node_base": "iqn.2016-06.io.spdk", 00:05:27.818 "max_sessions": 128, 00:05:27.818 "max_connections_per_session": 2, 00:05:27.818 "max_queue_depth": 64, 00:05:27.818 "default_time2wait": 2, 00:05:27.818 "default_time2retain": 20, 00:05:27.818 "first_burst_length": 8192, 00:05:27.818 "immediate_data": true, 00:05:27.818 "allow_duplicated_isid": false, 00:05:27.818 "error_recovery_level": 0, 
00:05:27.818 "nop_timeout": 60, 00:05:27.818 "nop_in_interval": 30, 00:05:27.818 "disable_chap": false, 00:05:27.818 "require_chap": false, 00:05:27.818 "mutual_chap": false, 00:05:27.818 "chap_group": 0, 00:05:27.818 "max_large_datain_per_connection": 64, 00:05:27.818 "max_r2t_per_connection": 4, 00:05:27.818 "pdu_pool_size": 36864, 00:05:27.818 "immediate_data_pool_size": 16384, 00:05:27.818 "data_out_pool_size": 2048 00:05:27.818 } 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 } 00:05:27.818 ] 00:05:27.818 } 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1457344 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1457344 ']' 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1457344 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457344 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457344' 00:05:27.818 killing process with pid 1457344 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1457344 00:05:27.818 14:01:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1457344 00:05:28.080 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1457696 00:05:28.080 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.080 14:01:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1457696 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1457696 ']' 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1457696 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1457696 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1457696' 00:05:33.367 killing process with pid 1457696 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1457696 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1457696 00:05:33.367 14:01:36 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:33.367 00:05:33.367 real 0m6.531s 00:05:33.367 user 0m6.222s 00:05:33.367 sys 0m0.573s 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.367 ************************************ 00:05:33.367 END TEST skip_rpc_with_json 00:05:33.367 ************************************ 00:05:33.367 14:01:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:33.367 14:01:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.367 14:01:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.367 14:01:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.367 ************************************ 00:05:33.367 START TEST skip_rpc_with_delay 00:05:33.367 ************************************ 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:33.367 14:01:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:33.367 [2024-10-13 14:01:37.029856] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
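The error above is the expected outcome of this test: --wait-for-rpc tells spdk_tgt to pause before subsystem initialization and wait for an RPC telling it to proceed, which is meaningless when --no-rpc-server disables the RPC server entirely. A minimal sketch of the normal pairing, assuming the default socket path and the standard framework_start_init RPC (neither taken from this run):

build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
# target is up, but subsystem init is deferred; pre-init tuning RPCs could go here
scripts/rpc.py framework_start_init    # releases the target to finish initialization
scripts/rpc.py spdk_get_version        # ordinary RPCs work from this point on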
00:05:33.367 14:01:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:33.367 14:01:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.367 14:01:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.367 14:01:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.367 00:05:33.367 real 0m0.076s 00:05:33.367 user 0m0.048s 00:05:33.367 sys 0m0.028s 00:05:33.367 14:01:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.367 14:01:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:33.367 ************************************ 00:05:33.367 END TEST skip_rpc_with_delay 00:05:33.367 ************************************ 00:05:33.627 14:01:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:33.627 14:01:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:33.627 14:01:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:33.627 14:01:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.627 14:01:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.627 14:01:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.627 ************************************ 00:05:33.627 START TEST exit_on_failed_rpc_init 00:05:33.627 ************************************ 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1458762 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1458762 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1458762 ']' 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.627 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.627 [2024-10-13 14:01:37.185712] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:33.627 [2024-10-13 14:01:37.185770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458762 ] 00:05:33.627 [2024-10-13 14:01:37.320532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:33.887 [2024-10-13 14:01:37.368817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.887 [2024-10-13 14:01:37.393169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.459 14:01:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:34.459 [2024-10-13 14:01:38.041094] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:34.459 [2024-10-13 14:01:38.041145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459097 ] 00:05:34.719 [2024-10-13 14:01:38.170922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:34.719 [2024-10-13 14:01:38.220115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.719 [2024-10-13 14:01:38.238020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.719 [2024-10-13 14:01:38.238089] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
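The "socket path /var/tmp/spdk.sock in use" error above is the point of this test: both instances defaulted to the same RPC socket. A hypothetical sketch of how two targets would normally coexist, using the -r flag seen elsewhere in this log to give the second instance its own socket:

build/bin/spdk_tgt -m 0x1 &                           # default socket /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &    # separate socket, no collision
scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version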
00:05:34.719 [2024-10-13 14:01:38.238099] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:34.719 [2024-10-13 14:01:38.238106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1458762 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1458762 ']' 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1458762 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1458762 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1458762' 00:05:34.719 killing process with pid 1458762 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1458762 00:05:34.719 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1458762 00:05:34.980 00:05:34.980 real 0m1.384s 00:05:34.980 user 0m1.470s 00:05:34.980 sys 0m0.393s 00:05:34.980 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.980 14:01:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 ************************************ 00:05:34.980 END TEST exit_on_failed_rpc_init 00:05:34.980 ************************************ 00:05:34.980 14:01:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.980 00:05:34.980 real 0m13.766s 00:05:34.980 user 0m12.893s 00:05:34.980 sys 0m1.593s 00:05:34.980 14:01:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.980 14:01:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 ************************************ 00:05:34.980 END TEST skip_rpc 00:05:34.980 ************************************ 00:05:34.980 14:01:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:34.980 14:01:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.980 14:01:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.980 14:01:38 -- 
common/autotest_common.sh@10 -- # set +x 00:05:34.980 ************************************ 00:05:34.980 START TEST rpc_client 00:05:34.980 ************************************ 00:05:34.980 14:01:38 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:35.241 * Looking for test storage... 00:05:35.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:35.241 14:01:38 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.241 14:01:38 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.241 14:01:38 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.241 14:01:38 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.241 14:01:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.242 14:01:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.242 14:01:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.242 14:01:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.242 --rc genhtml_branch_coverage=1 00:05:35.242 --rc genhtml_function_coverage=1 00:05:35.242 --rc genhtml_legend=1 00:05:35.242 --rc geninfo_all_blocks=1 00:05:35.242 --rc geninfo_unexecuted_blocks=1 00:05:35.242 00:05:35.242 ' 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.242 --rc genhtml_branch_coverage=1 00:05:35.242 --rc genhtml_function_coverage=1 00:05:35.242 --rc genhtml_legend=1 00:05:35.242 --rc geninfo_all_blocks=1 00:05:35.242 --rc geninfo_unexecuted_blocks=1 00:05:35.242 00:05:35.242 ' 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.242 --rc genhtml_branch_coverage=1 00:05:35.242 --rc genhtml_function_coverage=1 00:05:35.242 --rc genhtml_legend=1 00:05:35.242 --rc geninfo_all_blocks=1 00:05:35.242 --rc geninfo_unexecuted_blocks=1 00:05:35.242 00:05:35.242 ' 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.242 --rc genhtml_branch_coverage=1 00:05:35.242 --rc genhtml_function_coverage=1 00:05:35.242 --rc genhtml_legend=1 00:05:35.242 --rc geninfo_all_blocks=1 00:05:35.242 --rc geninfo_unexecuted_blocks=1 00:05:35.242 00:05:35.242 ' 00:05:35.242 14:01:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:35.242 OK 00:05:35.242 14:01:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.242 00:05:35.242 real 0m0.228s 00:05:35.242 user 0m0.133s 00:05:35.242 sys 0m0.110s 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.242 14:01:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.242 ************************************ 00:05:35.242 END TEST rpc_client 00:05:35.242 ************************************ 00:05:35.242 14:01:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
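rpc_client_test, which printed only "OK" above, is a compiled C client, so the log shows none of its traffic. As a rough shell-level equivalent of the round trip it exercises (JSON-RPC request out over the Unix socket, response back), one could drive a running target with rpc.py; the two method names below are standard SPDK RPCs, not taken from this run:

scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods     # enumerate registered methods
scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version    # simple request/response check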
00:05:35.242 14:01:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.242 14:01:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.242 14:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:35.242 ************************************ 00:05:35.242 START TEST json_config 00:05:35.242 ************************************ 00:05:35.242 14:01:38 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:35.504 14:01:39 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.504 14:01:39 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.504 14:01:39 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.504 14:01:39 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.504 14:01:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.504 14:01:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.504 14:01:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.504 14:01:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.504 14:01:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.504 14:01:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.504 14:01:39 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.504 14:01:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.504 14:01:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.504 14:01:39 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.504 14:01:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.504 14:01:39 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.504 14:01:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.504 14:01:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.504 14:01:39 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.505 --rc genhtml_branch_coverage=1 00:05:35.505 --rc genhtml_function_coverage=1 00:05:35.505 --rc genhtml_legend=1 00:05:35.505 --rc geninfo_all_blocks=1 00:05:35.505 --rc geninfo_unexecuted_blocks=1 00:05:35.505 00:05:35.505 ' 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.505 --rc genhtml_branch_coverage=1 00:05:35.505 --rc genhtml_function_coverage=1 00:05:35.505 --rc genhtml_legend=1 00:05:35.505 --rc geninfo_all_blocks=1 00:05:35.505 --rc geninfo_unexecuted_blocks=1 00:05:35.505 00:05:35.505 ' 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.505 --rc genhtml_branch_coverage=1 00:05:35.505 --rc genhtml_function_coverage=1 00:05:35.505 --rc genhtml_legend=1 00:05:35.505 --rc geninfo_all_blocks=1 00:05:35.505 --rc geninfo_unexecuted_blocks=1 00:05:35.505 00:05:35.505 ' 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.505 --rc genhtml_branch_coverage=1 00:05:35.505 --rc genhtml_function_coverage=1 00:05:35.505 --rc genhtml_legend=1 00:05:35.505 --rc geninfo_all_blocks=1 00:05:35.505 --rc geninfo_unexecuted_blocks=1 00:05:35.505 00:05:35.505 ' 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:35.505 14:01:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.505 14:01:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.505 14:01:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.505 14:01:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.505 14:01:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.505 14:01:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.505 14:01:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.505 14:01:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.505 14:01:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.505 14:01:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:35.505 14:01:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.505 14:01:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:35.505 INFO: JSON configuration test init 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.505 14:01:39 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:35.505 14:01:39 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:35.505 14:01:39 json_config -- json_config/common.sh@10 -- # shift 00:05:35.505 14:01:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.505 14:01:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.505 14:01:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.505 14:01:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.505 14:01:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.505 14:01:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1459332 00:05:35.505 14:01:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.505 Waiting for target to run... 00:05:35.505 14:01:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1459332 /var/tmp/spdk_tgt.sock 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 1459332 ']' 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.505 14:01:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.505 14:01:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.766 [2024-10-13 14:01:39.228177] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:35.767 [2024-10-13 14:01:39.228249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459332 ] 00:05:36.027 [2024-10-13 14:01:39.569185] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
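waitforlisten, used above, avoids a fixed sleep by polling until the target's RPC socket actually answers. A sketch of the idea, assuming the helper's internals roughly follow this shape (the real implementation lives in autotest_common.sh):

for i in $(seq 1 100); do
    # -t 1 caps each probe at one second; success means the target is listening
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done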
00:05:36.027 [2024-10-13 14:01:39.617632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.027 [2024-10-13 14:01:39.628419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.598 14:01:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.598 14:01:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:36.598 14:01:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.598 00:05:36.598 14:01:40 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:36.598 14:01:40 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:36.598 14:01:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:36.598 14:01:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.598 14:01:40 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:36.598 14:01:40 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:36.598 14:01:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.598 14:01:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.598 14:01:40 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:36.598 14:01:40 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:36.598 14:01:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:37.169 14:01:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.169 14:01:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:37.169 14:01:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@54 -- # sort 
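The sort just issued feeds the uniq -u step that follows: together they compute a set difference with no temporary files. The expected notification types and the reported ones are printed into one stream, so after sorting every matched entry occurs twice and uniq -u deletes it; only mismatches survive. In sketch form, with illustrative variable contents:

enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
get_types="fsdev_register fsdev_unregister bdev_register bdev_unregister"
type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)
[[ -z "$type_diff" ]] && echo "notification types match"   # empty output means identical sets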
00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:37.169 14:01:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:37.169 14:01:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:37.169 14:01:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.169 14:01:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:37.169 14:01:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.169 14:01:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.430 MallocForNvmf0 00:05:37.430 14:01:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.430 14:01:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.690 MallocForNvmf1 00:05:37.690 14:01:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.690 14:01:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.690 [2024-10-13 14:01:41.341490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.690 14:01:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.690 14:01:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.949 14:01:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.950 14:01:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.210 
14:01:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.210 14:01:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.210 14:01:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.210 14:01:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.471 [2024-10-13 14:01:42.049962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.471 14:01:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:38.471 14:01:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.471 14:01:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.471 14:01:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:38.471 14:01:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.471 14:01:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.471 14:01:42 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:38.471 14:01:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.471 14:01:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.730 MallocBdevForConfigChangeCheck 00:05:38.730 14:01:42 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:38.730 14:01:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.730 14:01:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.730 14:01:42 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:38.730 14:01:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.300 14:01:42 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:39.300 INFO: shutting down applications... 
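Condensed from the RPC calls above, this is the whole NVMe/TCP target build-up in one place; every command and argument appears verbatim in this run, and only the $RPC shorthand is added:

RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0                     # backing bdev
$RPC nvmf_create_transport -t tcp -u 8192 -c 0                          # TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0    # attach namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420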
00:05:39.300 14:01:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:39.300 14:01:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:39.300 14:01:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:39.300 14:01:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.560 Calling clear_iscsi_subsystem 00:05:39.560 Calling clear_nvmf_subsystem 00:05:39.560 Calling clear_nbd_subsystem 00:05:39.560 Calling clear_ublk_subsystem 00:05:39.560 Calling clear_vhost_blk_subsystem 00:05:39.560 Calling clear_vhost_scsi_subsystem 00:05:39.560 Calling clear_bdev_subsystem 00:05:39.560 14:01:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:39.560 14:01:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:39.560 14:01:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:39.560 14:01:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.560 14:01:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.560 14:01:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:39.821 14:01:43 json_config -- json_config/json_config.sh@352 -- # break 00:05:39.821 14:01:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:39.821 14:01:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:39.821 14:01:43 json_config -- json_config/common.sh@31 -- # local app=target 00:05:39.821 14:01:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.821 14:01:43 json_config -- json_config/common.sh@35 -- # [[ -n 1459332 ]] 00:05:39.821 14:01:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1459332 00:05:39.821 14:01:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.821 14:01:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.821 14:01:43 json_config -- json_config/common.sh@41 -- # kill -0 1459332 00:05:39.821 14:01:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.392 14:01:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.392 14:01:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.392 14:01:43 json_config -- json_config/common.sh@41 -- # kill -0 1459332 00:05:40.392 14:01:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.392 14:01:43 json_config -- json_config/common.sh@43 -- # break 00:05:40.392 14:01:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.392 14:01:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.392 SPDK target shutdown done 00:05:40.392 14:01:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:40.392 INFO: relaunching applications... 
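The shutdown sequence above follows a standard pattern: send SIGINT once, then poll with kill -0 (which tests process existence without delivering a signal) until the target exits or the retry budget runs out. In outline:

kill -SIGINT "$app_pid"
for i in $(seq 1 30); do                      # same 30-iteration budget as the test
    kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown completed
    sleep 0.5
done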
00:05:40.392 14:01:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.392 14:01:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.392 14:01:43 json_config -- json_config/common.sh@10 -- # shift 00:05:40.392 14:01:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.392 14:01:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.392 14:01:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.392 14:01:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.392 14:01:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.392 14:01:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1460379 00:05:40.392 14:01:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.392 Waiting for target to run... 00:05:40.392 14:01:43 json_config -- json_config/common.sh@25 -- # waitforlisten 1460379 /var/tmp/spdk_tgt.sock 00:05:40.392 14:01:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.392 14:01:43 json_config -- common/autotest_common.sh@831 -- # '[' -z 1460379 ']' 00:05:40.392 14:01:43 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.392 14:01:43 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.392 14:01:43 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.392 14:01:43 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.392 14:01:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.392 [2024-10-13 14:01:44.049454] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:40.392 [2024-10-13 14:01:44.049511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460379 ] 00:05:40.963 [2024-10-13 14:01:44.404305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
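This relaunch closes the save/replay loop the suite is really testing: the configuration captured earlier with save_config is fed back through --json, so the new target must reconstruct the same state with no further RPCs. As a sketch, with an illustrative $old_pid variable:

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
kill -SIGINT "$old_pid"                                   # stop the original target
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &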
00:05:40.963 [2024-10-13 14:01:44.454866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.963 [2024-10-13 14:01:44.465968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.533 [2024-10-13 14:01:44.937148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.533 [2024-10-13 14:01:44.969422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.533 14:01:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.533 14:01:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:41.533 14:01:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.533 00:05:41.533 14:01:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:41.533 14:01:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.533 INFO: Checking if target configuration is the same... 00:05:41.533 14:01:45 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.533 14:01:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:41.533 14:01:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.533 + '[' 2 -ne 2 ']' 00:05:41.533 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.533 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.533 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.533 +++ basename /dev/fd/62 00:05:41.533 ++ mktemp /tmp/62.XXX 00:05:41.533 + tmp_file_1=/tmp/62.lB9 00:05:41.533 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.533 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.533 + tmp_file_2=/tmp/spdk_tgt_config.json.14i 00:05:41.533 + ret=0 00:05:41.533 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.793 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.793 + diff -u /tmp/62.lB9 /tmp/spdk_tgt_config.json.14i 00:05:41.793 + echo 'INFO: JSON config files are the same' 00:05:41.793 INFO: JSON config files are the same 00:05:41.793 + rm /tmp/62.lB9 /tmp/spdk_tgt_config.json.14i 00:05:41.793 + exit 0 00:05:41.793 14:01:45 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:41.793 14:01:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.793 INFO: changing configuration and checking if this can be detected... 
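The "same configuration" verdict above comes from normalize-then-diff: the live config is pulled over RPC with save_config, the on-disk startup config is read from spdk_tgt_config.json, both are passed through config_filter.py -method sort so key and array ordering cannot produce spurious differences, and diff -u decides. Condensed sketch (repository-relative paths, temp file names illustrative):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/startup.json
    diff -u /tmp/live.json /tmp/startup.json \
        && echo 'INFO: JSON config files are the same'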
00:05:41.793 14:01:45 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.793 14:01:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.053 14:01:45 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.053 14:01:45 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:42.053 14:01:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.053 + '[' 2 -ne 2 ']' 00:05:42.053 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.053 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.053 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.053 +++ basename /dev/fd/62 00:05:42.053 ++ mktemp /tmp/62.XXX 00:05:42.053 + tmp_file_1=/tmp/62.nbx 00:05:42.053 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.053 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.053 + tmp_file_2=/tmp/spdk_tgt_config.json.epp 00:05:42.053 + ret=0 00:05:42.053 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.314 + diff -u /tmp/62.nbx /tmp/spdk_tgt_config.json.epp 00:05:42.314 + ret=1 00:05:42.314 + echo '=== Start of file: /tmp/62.nbx ===' 00:05:42.314 + cat /tmp/62.nbx 00:05:42.314 + echo '=== End of file: /tmp/62.nbx ===' 00:05:42.314 + echo '' 00:05:42.314 + echo '=== Start of file: /tmp/spdk_tgt_config.json.epp ===' 00:05:42.314 + cat /tmp/spdk_tgt_config.json.epp 00:05:42.314 + echo '=== End of file: /tmp/spdk_tgt_config.json.epp ===' 00:05:42.314 + echo '' 00:05:42.314 + rm /tmp/62.nbx /tmp/spdk_tgt_config.json.epp 00:05:42.314 + exit 1 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:42.314 INFO: configuration change detected. 
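The change-detection half inverts the check: delete a bdev that exists purely for this purpose (MallocBdevForConfigChangeCheck) over RPC, re-run the identical normalize-then-diff, and require a non-zero diff status. In outline (a condensed sketch, not the verbatim json_diff.sh plumbing):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! diff -u <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
                       | test/json_config/config_filter.py -method sort) \
                 <(test/json_config/config_filter.py -method sort \
                       < spdk_tgt_config.json); then
        echo 'INFO: configuration change detected.'
    fi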
00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:42.314 14:01:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.314 14:01:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 1460379 ]] 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.314 14:01:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.314 14:01:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:42.314 14:01:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.314 14:01:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.314 14:01:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.314 14:01:46 json_config -- json_config/json_config.sh@330 -- # killprocess 1460379 00:05:42.314 14:01:46 json_config -- common/autotest_common.sh@950 -- # '[' -z 1460379 ']' 00:05:42.314 14:01:46 json_config -- common/autotest_common.sh@954 -- # kill -0 1460379 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@955 -- # uname 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460379 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460379' 00:05:42.575 killing process with pid 1460379 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@969 -- # kill 1460379 00:05:42.575 14:01:46 json_config -- common/autotest_common.sh@974 -- # wait 1460379 00:05:42.836 14:01:46 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.836 14:01:46 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:42.836 14:01:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.836 14:01:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 14:01:46 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:42.836 14:01:46 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:42.836 INFO: Success 00:05:42.836 00:05:42.836 real 0m7.441s 
00:05:42.836 user 0m8.805s 00:05:42.836 sys 0m1.951s 00:05:42.836 14:01:46 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.836 14:01:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 ************************************ 00:05:42.836 END TEST json_config 00:05:42.836 ************************************ 00:05:42.836 14:01:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.836 14:01:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.836 14:01:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.836 14:01:46 -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 ************************************ 00:05:42.836 START TEST json_config_extra_key 00:05:42.836 ************************************ 00:05:42.836 14:01:46 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:42.836 14:01:46 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.836 14:01:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.836 14:01:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.097 14:01:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.097 14:01:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.098 --rc genhtml_branch_coverage=1 00:05:43.098 --rc genhtml_function_coverage=1 00:05:43.098 --rc genhtml_legend=1 00:05:43.098 --rc geninfo_all_blocks=1 00:05:43.098 --rc geninfo_unexecuted_blocks=1 00:05:43.098 00:05:43.098 ' 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.098 --rc genhtml_branch_coverage=1 00:05:43.098 --rc genhtml_function_coverage=1 00:05:43.098 --rc genhtml_legend=1 00:05:43.098 --rc geninfo_all_blocks=1 00:05:43.098 --rc geninfo_unexecuted_blocks=1 00:05:43.098 00:05:43.098 ' 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.098 --rc genhtml_branch_coverage=1 00:05:43.098 --rc genhtml_function_coverage=1 00:05:43.098 --rc genhtml_legend=1 00:05:43.098 --rc geninfo_all_blocks=1 00:05:43.098 --rc geninfo_unexecuted_blocks=1 00:05:43.098 00:05:43.098 ' 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.098 --rc genhtml_branch_coverage=1 00:05:43.098 --rc genhtml_function_coverage=1 00:05:43.098 --rc genhtml_legend=1 00:05:43.098 --rc geninfo_all_blocks=1 00:05:43.098 --rc geninfo_unexecuted_blocks=1 00:05:43.098 00:05:43.098 ' 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.098 
14:01:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.098 14:01:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.098 14:01:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.098 14:01:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.098 14:01:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.098 14:01:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.098 14:01:46 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.098 14:01:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.098 INFO: launching applications... 
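The declare -A lines above are the heart of json_config/common.sh's reuse: all per-app state (pid, RPC socket, spdk_tgt flags, config file) lives in associative arrays keyed by app name, 'target' here. A sketch of how startup might consume them (launch details simplified; $app_params is deliberately unquoted so the flags word-split):

    declare -A app_pid app_socket app_params configs_path
    app_socket['target']='/var/tmp/spdk_tgt.sock'
    app_params['target']='-m 0x1 -s 1024'
    configs_path['target']='test/json_config/extra_key.json'

    # Launch and record the pid under the same key.
    build/bin/spdk_tgt ${app_params['target']} \
        -r "${app_socket['target']}" --json "${configs_path['target']}" &
    app_pid['target']=$!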
00:05:43.098 14:01:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1461157 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.098 Waiting for target to run... 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1461157 /var/tmp/spdk_tgt.sock 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1461157 ']' 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.098 14:01:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.098 14:01:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.098 [2024-10-13 14:01:46.732647] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:43.099 [2024-10-13 14:01:46.732717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461157 ] 00:05:43.670 [2024-10-13 14:01:47.097027] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:43.670 [2024-10-13 14:01:47.144897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.670 [2024-10-13 14:01:47.158007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.930 14:01:47 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.930 14:01:47 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:43.930 14:01:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:43.930 00:05:43.930 14:01:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:43.930 INFO: shutting down applications... 00:05:43.931 14:01:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1461157 ]] 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1461157 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1461157 00:05:43.931 14:01:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1461157 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.501 14:01:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.501 SPDK target shutdown done 00:05:44.501 14:01:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.501 Success 00:05:44.501 00:05:44.501 real 0m1.577s 00:05:44.501 user 0m1.038s 00:05:44.501 sys 0m0.464s 00:05:44.501 14:01:48 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.501 14:01:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.501 ************************************ 00:05:44.501 END TEST json_config_extra_key 00:05:44.501 ************************************ 00:05:44.501 14:01:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.501 14:01:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.501 14:01:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.501 14:01:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.501 ************************************ 00:05:44.501 START TEST alias_rpc 00:05:44.501 ************************************ 00:05:44.501 14:01:48 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.501 * Looking for test storage... 
00:05:44.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.763 14:01:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.763 --rc genhtml_branch_coverage=1 00:05:44.763 --rc genhtml_function_coverage=1 00:05:44.763 --rc genhtml_legend=1 00:05:44.763 --rc geninfo_all_blocks=1 00:05:44.763 --rc geninfo_unexecuted_blocks=1 00:05:44.763 00:05:44.763 ' 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.763 --rc genhtml_branch_coverage=1 00:05:44.763 --rc genhtml_function_coverage=1 00:05:44.763 --rc genhtml_legend=1 00:05:44.763 --rc geninfo_all_blocks=1 00:05:44.763 --rc geninfo_unexecuted_blocks=1 00:05:44.763 00:05:44.763 ' 00:05:44.763 14:01:48 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.763 --rc genhtml_branch_coverage=1 00:05:44.763 --rc genhtml_function_coverage=1 00:05:44.763 --rc genhtml_legend=1 00:05:44.763 --rc geninfo_all_blocks=1 00:05:44.763 --rc geninfo_unexecuted_blocks=1 00:05:44.763 00:05:44.763 ' 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.763 --rc genhtml_branch_coverage=1 00:05:44.763 --rc genhtml_function_coverage=1 00:05:44.763 --rc genhtml_legend=1 00:05:44.763 --rc geninfo_all_blocks=1 00:05:44.763 --rc geninfo_unexecuted_blocks=1 00:05:44.763 00:05:44.763 ' 00:05:44.763 14:01:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:44.763 14:01:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1461548 00:05:44.763 14:01:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1461548 00:05:44.763 14:01:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1461548 ']' 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.763 14:01:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 [2024-10-13 14:01:48.364270] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:44.763 [2024-10-13 14:01:48.364335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461548 ] 00:05:45.023 [2024-10-13 14:01:48.498250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
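Note the ERR trap registered at alias_rpc.sh line 10 before the target comes up: if any subsequent command fails, killprocess reaps the target instead of leaking it into the next test. The pattern, sketched (launch and wait details simplified):

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR
    build/bin/spdk_tgt &                  # launch details simplified
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"         # default socket /var/tmp/spdk.sock
    # ... test body, e.g. rpc.py load_config -i ...
    killprocess "$spdk_tgt_pid"           # normal-path cleanup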
00:05:45.023 [2024-10-13 14:01:48.543778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.023 [2024-10-13 14:01:48.560861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.594 14:01:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.594 14:01:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.594 14:01:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:45.865 14:01:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1461548 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1461548 ']' 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1461548 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461548 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461548' 00:05:45.865 killing process with pid 1461548 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 1461548 00:05:45.865 14:01:49 alias_rpc -- common/autotest_common.sh@974 -- # wait 1461548 00:05:46.128 00:05:46.128 real 0m1.475s 00:05:46.128 user 0m1.536s 00:05:46.128 sys 0m0.404s 00:05:46.128 14:01:49 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.128 14:01:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 ************************************ 00:05:46.128 END TEST alias_rpc 00:05:46.128 ************************************ 00:05:46.128 14:01:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:46.128 14:01:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.128 14:01:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.128 14:01:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.128 14:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 ************************************ 00:05:46.128 START TEST spdkcli_tcp 00:05:46.128 ************************************ 00:05:46.128 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.128 * Looking for test storage... 
00:05:46.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.128 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.128 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.128 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.389 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.389 14:01:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.390 14:01:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc 
geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.390 --rc genhtml_branch_coverage=1 00:05:46.390 --rc genhtml_function_coverage=1 00:05:46.390 --rc genhtml_legend=1 00:05:46.390 --rc geninfo_all_blocks=1 00:05:46.390 --rc geninfo_unexecuted_blocks=1 00:05:46.390 00:05:46.390 ' 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1461948 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1461948 00:05:46.390 14:01:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1461948 ']' 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.390 14:01:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.390 [2024-10-13 14:01:49.926041] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:46.390 [2024-10-13 14:01:49.926102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461948 ] 00:05:46.390 [2024-10-13 14:01:50.057683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
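spdkcli/tcp.sh exists to exercise rpc.py's TCP transport: a socat process bridges TCP 127.0.0.1:9998 to the target's default UNIX RPC socket, and rpc.py then connects with -s/-p plus -r 100 connection retries and a -t 2 second timeout, exactly as the next trace lines show. The bridge in isolation (a sketch, omitting the test's trap/cleanup wiring):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"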
00:05:46.651 [2024-10-13 14:01:50.105266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.651 [2024-10-13 14:01:50.125911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.651 [2024-10-13 14:01:50.125910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.223 14:01:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.223 14:01:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:47.223 14:01:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1461965 00:05:47.223 14:01:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.223 14:01:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.223 [ 00:05:47.223 "bdev_malloc_delete", 00:05:47.223 "bdev_malloc_create", 00:05:47.223 "bdev_null_resize", 00:05:47.223 "bdev_null_delete", 00:05:47.223 "bdev_null_create", 00:05:47.223 "bdev_nvme_cuse_unregister", 00:05:47.223 "bdev_nvme_cuse_register", 00:05:47.223 "bdev_opal_new_user", 00:05:47.223 "bdev_opal_set_lock_state", 00:05:47.223 "bdev_opal_delete", 00:05:47.223 "bdev_opal_get_info", 00:05:47.223 "bdev_opal_create", 00:05:47.223 "bdev_nvme_opal_revert", 00:05:47.223 "bdev_nvme_opal_init", 00:05:47.223 "bdev_nvme_send_cmd", 00:05:47.223 "bdev_nvme_set_keys", 00:05:47.223 "bdev_nvme_get_path_iostat", 00:05:47.223 "bdev_nvme_get_mdns_discovery_info", 00:05:47.223 "bdev_nvme_stop_mdns_discovery", 00:05:47.223 "bdev_nvme_start_mdns_discovery", 00:05:47.223 "bdev_nvme_set_multipath_policy", 00:05:47.223 "bdev_nvme_set_preferred_path", 00:05:47.223 "bdev_nvme_get_io_paths", 00:05:47.223 "bdev_nvme_remove_error_injection", 00:05:47.223 "bdev_nvme_add_error_injection", 00:05:47.223 "bdev_nvme_get_discovery_info", 00:05:47.223 "bdev_nvme_stop_discovery", 00:05:47.223 "bdev_nvme_start_discovery", 00:05:47.223 "bdev_nvme_get_controller_health_info", 00:05:47.223 "bdev_nvme_disable_controller", 00:05:47.223 "bdev_nvme_enable_controller", 00:05:47.223 "bdev_nvme_reset_controller", 00:05:47.223 "bdev_nvme_get_transport_statistics", 00:05:47.223 "bdev_nvme_apply_firmware", 00:05:47.223 "bdev_nvme_detach_controller", 00:05:47.223 "bdev_nvme_get_controllers", 00:05:47.223 "bdev_nvme_attach_controller", 00:05:47.223 "bdev_nvme_set_hotplug", 00:05:47.223 "bdev_nvme_set_options", 00:05:47.223 "bdev_passthru_delete", 00:05:47.223 "bdev_passthru_create", 00:05:47.223 "bdev_lvol_set_parent_bdev", 00:05:47.223 "bdev_lvol_set_parent", 00:05:47.223 "bdev_lvol_check_shallow_copy", 00:05:47.223 "bdev_lvol_start_shallow_copy", 00:05:47.223 "bdev_lvol_grow_lvstore", 00:05:47.223 "bdev_lvol_get_lvols", 00:05:47.223 "bdev_lvol_get_lvstores", 00:05:47.223 "bdev_lvol_delete", 00:05:47.223 "bdev_lvol_set_read_only", 00:05:47.223 "bdev_lvol_resize", 00:05:47.223 "bdev_lvol_decouple_parent", 00:05:47.223 "bdev_lvol_inflate", 00:05:47.223 "bdev_lvol_rename", 00:05:47.223 "bdev_lvol_clone_bdev", 00:05:47.223 "bdev_lvol_clone", 00:05:47.223 "bdev_lvol_snapshot", 00:05:47.223 "bdev_lvol_create", 00:05:47.223 "bdev_lvol_delete_lvstore", 00:05:47.223 "bdev_lvol_rename_lvstore", 00:05:47.223 "bdev_lvol_create_lvstore", 00:05:47.223 "bdev_raid_set_options", 00:05:47.223 "bdev_raid_remove_base_bdev", 00:05:47.223 "bdev_raid_add_base_bdev", 00:05:47.223 "bdev_raid_delete", 00:05:47.223 "bdev_raid_create", 00:05:47.223 "bdev_raid_get_bdevs", 00:05:47.223 "bdev_error_inject_error", 
00:05:47.223 "bdev_error_delete", 00:05:47.223 "bdev_error_create", 00:05:47.223 "bdev_split_delete", 00:05:47.223 "bdev_split_create", 00:05:47.223 "bdev_delay_delete", 00:05:47.223 "bdev_delay_create", 00:05:47.223 "bdev_delay_update_latency", 00:05:47.223 "bdev_zone_block_delete", 00:05:47.223 "bdev_zone_block_create", 00:05:47.223 "blobfs_create", 00:05:47.223 "blobfs_detect", 00:05:47.223 "blobfs_set_cache_size", 00:05:47.223 "bdev_aio_delete", 00:05:47.223 "bdev_aio_rescan", 00:05:47.223 "bdev_aio_create", 00:05:47.223 "bdev_ftl_set_property", 00:05:47.223 "bdev_ftl_get_properties", 00:05:47.224 "bdev_ftl_get_stats", 00:05:47.224 "bdev_ftl_unmap", 00:05:47.224 "bdev_ftl_unload", 00:05:47.224 "bdev_ftl_delete", 00:05:47.224 "bdev_ftl_load", 00:05:47.224 "bdev_ftl_create", 00:05:47.224 "bdev_virtio_attach_controller", 00:05:47.224 "bdev_virtio_scsi_get_devices", 00:05:47.224 "bdev_virtio_detach_controller", 00:05:47.224 "bdev_virtio_blk_set_hotplug", 00:05:47.224 "bdev_iscsi_delete", 00:05:47.224 "bdev_iscsi_create", 00:05:47.224 "bdev_iscsi_set_options", 00:05:47.224 "accel_error_inject_error", 00:05:47.224 "ioat_scan_accel_module", 00:05:47.224 "dsa_scan_accel_module", 00:05:47.224 "iaa_scan_accel_module", 00:05:47.224 "vfu_virtio_create_fs_endpoint", 00:05:47.224 "vfu_virtio_create_scsi_endpoint", 00:05:47.224 "vfu_virtio_scsi_remove_target", 00:05:47.224 "vfu_virtio_scsi_add_target", 00:05:47.224 "vfu_virtio_create_blk_endpoint", 00:05:47.224 "vfu_virtio_delete_endpoint", 00:05:47.224 "keyring_file_remove_key", 00:05:47.224 "keyring_file_add_key", 00:05:47.224 "keyring_linux_set_options", 00:05:47.224 "fsdev_aio_delete", 00:05:47.224 "fsdev_aio_create", 00:05:47.224 "iscsi_get_histogram", 00:05:47.224 "iscsi_enable_histogram", 00:05:47.224 "iscsi_set_options", 00:05:47.224 "iscsi_get_auth_groups", 00:05:47.224 "iscsi_auth_group_remove_secret", 00:05:47.224 "iscsi_auth_group_add_secret", 00:05:47.224 "iscsi_delete_auth_group", 00:05:47.224 "iscsi_create_auth_group", 00:05:47.224 "iscsi_set_discovery_auth", 00:05:47.224 "iscsi_get_options", 00:05:47.224 "iscsi_target_node_request_logout", 00:05:47.224 "iscsi_target_node_set_redirect", 00:05:47.224 "iscsi_target_node_set_auth", 00:05:47.224 "iscsi_target_node_add_lun", 00:05:47.224 "iscsi_get_stats", 00:05:47.224 "iscsi_get_connections", 00:05:47.224 "iscsi_portal_group_set_auth", 00:05:47.224 "iscsi_start_portal_group", 00:05:47.224 "iscsi_delete_portal_group", 00:05:47.224 "iscsi_create_portal_group", 00:05:47.224 "iscsi_get_portal_groups", 00:05:47.224 "iscsi_delete_target_node", 00:05:47.224 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.224 "iscsi_target_node_add_pg_ig_maps", 00:05:47.224 "iscsi_create_target_node", 00:05:47.224 "iscsi_get_target_nodes", 00:05:47.224 "iscsi_delete_initiator_group", 00:05:47.224 "iscsi_initiator_group_remove_initiators", 00:05:47.224 "iscsi_initiator_group_add_initiators", 00:05:47.224 "iscsi_create_initiator_group", 00:05:47.224 "iscsi_get_initiator_groups", 00:05:47.224 "nvmf_set_crdt", 00:05:47.224 "nvmf_set_config", 00:05:47.224 "nvmf_set_max_subsystems", 00:05:47.224 "nvmf_stop_mdns_prr", 00:05:47.224 "nvmf_publish_mdns_prr", 00:05:47.224 "nvmf_subsystem_get_listeners", 00:05:47.224 "nvmf_subsystem_get_qpairs", 00:05:47.224 "nvmf_subsystem_get_controllers", 00:05:47.224 "nvmf_get_stats", 00:05:47.224 "nvmf_get_transports", 00:05:47.224 "nvmf_create_transport", 00:05:47.224 "nvmf_get_targets", 00:05:47.224 "nvmf_delete_target", 00:05:47.224 "nvmf_create_target", 00:05:47.224 
"nvmf_subsystem_allow_any_host", 00:05:47.224 "nvmf_subsystem_set_keys", 00:05:47.224 "nvmf_subsystem_remove_host", 00:05:47.224 "nvmf_subsystem_add_host", 00:05:47.224 "nvmf_ns_remove_host", 00:05:47.224 "nvmf_ns_add_host", 00:05:47.224 "nvmf_subsystem_remove_ns", 00:05:47.224 "nvmf_subsystem_set_ns_ana_group", 00:05:47.224 "nvmf_subsystem_add_ns", 00:05:47.224 "nvmf_subsystem_listener_set_ana_state", 00:05:47.224 "nvmf_discovery_get_referrals", 00:05:47.224 "nvmf_discovery_remove_referral", 00:05:47.224 "nvmf_discovery_add_referral", 00:05:47.224 "nvmf_subsystem_remove_listener", 00:05:47.224 "nvmf_subsystem_add_listener", 00:05:47.224 "nvmf_delete_subsystem", 00:05:47.224 "nvmf_create_subsystem", 00:05:47.224 "nvmf_get_subsystems", 00:05:47.224 "env_dpdk_get_mem_stats", 00:05:47.224 "nbd_get_disks", 00:05:47.224 "nbd_stop_disk", 00:05:47.224 "nbd_start_disk", 00:05:47.224 "ublk_recover_disk", 00:05:47.224 "ublk_get_disks", 00:05:47.224 "ublk_stop_disk", 00:05:47.224 "ublk_start_disk", 00:05:47.224 "ublk_destroy_target", 00:05:47.224 "ublk_create_target", 00:05:47.224 "virtio_blk_create_transport", 00:05:47.224 "virtio_blk_get_transports", 00:05:47.224 "vhost_controller_set_coalescing", 00:05:47.224 "vhost_get_controllers", 00:05:47.224 "vhost_delete_controller", 00:05:47.224 "vhost_create_blk_controller", 00:05:47.224 "vhost_scsi_controller_remove_target", 00:05:47.224 "vhost_scsi_controller_add_target", 00:05:47.224 "vhost_start_scsi_controller", 00:05:47.224 "vhost_create_scsi_controller", 00:05:47.224 "thread_set_cpumask", 00:05:47.224 "scheduler_set_options", 00:05:47.224 "framework_get_governor", 00:05:47.224 "framework_get_scheduler", 00:05:47.224 "framework_set_scheduler", 00:05:47.224 "framework_get_reactors", 00:05:47.224 "thread_get_io_channels", 00:05:47.224 "thread_get_pollers", 00:05:47.224 "thread_get_stats", 00:05:47.224 "framework_monitor_context_switch", 00:05:47.224 "spdk_kill_instance", 00:05:47.224 "log_enable_timestamps", 00:05:47.224 "log_get_flags", 00:05:47.224 "log_clear_flag", 00:05:47.224 "log_set_flag", 00:05:47.224 "log_get_level", 00:05:47.224 "log_set_level", 00:05:47.224 "log_get_print_level", 00:05:47.224 "log_set_print_level", 00:05:47.224 "framework_enable_cpumask_locks", 00:05:47.224 "framework_disable_cpumask_locks", 00:05:47.224 "framework_wait_init", 00:05:47.224 "framework_start_init", 00:05:47.224 "scsi_get_devices", 00:05:47.224 "bdev_get_histogram", 00:05:47.224 "bdev_enable_histogram", 00:05:47.224 "bdev_set_qos_limit", 00:05:47.224 "bdev_set_qd_sampling_period", 00:05:47.224 "bdev_get_bdevs", 00:05:47.224 "bdev_reset_iostat", 00:05:47.224 "bdev_get_iostat", 00:05:47.224 "bdev_examine", 00:05:47.224 "bdev_wait_for_examine", 00:05:47.224 "bdev_set_options", 00:05:47.224 "accel_get_stats", 00:05:47.224 "accel_set_options", 00:05:47.224 "accel_set_driver", 00:05:47.224 "accel_crypto_key_destroy", 00:05:47.224 "accel_crypto_keys_get", 00:05:47.224 "accel_crypto_key_create", 00:05:47.224 "accel_assign_opc", 00:05:47.224 "accel_get_module_info", 00:05:47.224 "accel_get_opc_assignments", 00:05:47.224 "vmd_rescan", 00:05:47.224 "vmd_remove_device", 00:05:47.224 "vmd_enable", 00:05:47.224 "sock_get_default_impl", 00:05:47.224 "sock_set_default_impl", 00:05:47.224 "sock_impl_set_options", 00:05:47.224 "sock_impl_get_options", 00:05:47.224 "iobuf_get_stats", 00:05:47.224 "iobuf_set_options", 00:05:47.224 "keyring_get_keys", 00:05:47.224 "vfu_tgt_set_base_path", 00:05:47.224 "framework_get_pci_devices", 00:05:47.224 "framework_get_config", 00:05:47.224 
"framework_get_subsystems", 00:05:47.224 "fsdev_set_opts", 00:05:47.224 "fsdev_get_opts", 00:05:47.224 "trace_get_info", 00:05:47.224 "trace_get_tpoint_group_mask", 00:05:47.224 "trace_disable_tpoint_group", 00:05:47.224 "trace_enable_tpoint_group", 00:05:47.224 "trace_clear_tpoint_mask", 00:05:47.224 "trace_set_tpoint_mask", 00:05:47.224 "notify_get_notifications", 00:05:47.224 "notify_get_types", 00:05:47.224 "spdk_get_version", 00:05:47.224 "rpc_get_methods" 00:05:47.224 ] 00:05:47.224 14:01:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.224 14:01:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.224 14:01:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.485 14:01:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.485 14:01:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1461948 00:05:47.485 14:01:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1461948 ']' 00:05:47.485 14:01:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1461948 00:05:47.485 14:01:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:47.485 14:01:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.485 14:01:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461948 00:05:47.485 14:01:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.485 14:01:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.485 14:01:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461948' 00:05:47.485 killing process with pid 1461948 00:05:47.485 14:01:51 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1461948 00:05:47.485 14:01:51 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1461948 00:05:47.746 00:05:47.746 real 0m1.531s 00:05:47.746 user 0m2.668s 00:05:47.746 sys 0m0.433s 00:05:47.746 14:01:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.746 14:01:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.746 ************************************ 00:05:47.746 END TEST spdkcli_tcp 00:05:47.746 ************************************ 00:05:47.746 14:01:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.746 14:01:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.746 14:01:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.746 14:01:51 -- common/autotest_common.sh@10 -- # set +x 00:05:47.746 ************************************ 00:05:47.746 START TEST dpdk_mem_utility 00:05:47.746 ************************************ 00:05:47.746 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.746 * Looking for test storage... 
00:05:47.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:47.746 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.746 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.746 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.007 14:01:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.007 --rc genhtml_branch_coverage=1 00:05:48.007 --rc genhtml_function_coverage=1 00:05:48.007 --rc genhtml_legend=1 00:05:48.007 --rc geninfo_all_blocks=1 00:05:48.007 --rc geninfo_unexecuted_blocks=1 00:05:48.007 00:05:48.007 ' 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.007 --rc 
genhtml_branch_coverage=1 00:05:48.007 --rc genhtml_function_coverage=1 00:05:48.007 --rc genhtml_legend=1 00:05:48.007 --rc geninfo_all_blocks=1 00:05:48.007 --rc geninfo_unexecuted_blocks=1 00:05:48.007 00:05:48.007 ' 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.007 --rc genhtml_branch_coverage=1 00:05:48.007 --rc genhtml_function_coverage=1 00:05:48.007 --rc genhtml_legend=1 00:05:48.007 --rc geninfo_all_blocks=1 00:05:48.007 --rc geninfo_unexecuted_blocks=1 00:05:48.007 00:05:48.007 ' 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.007 --rc genhtml_branch_coverage=1 00:05:48.007 --rc genhtml_function_coverage=1 00:05:48.007 --rc genhtml_legend=1 00:05:48.007 --rc geninfo_all_blocks=1 00:05:48.007 --rc geninfo_unexecuted_blocks=1 00:05:48.007 00:05:48.007 ' 00:05:48.007 14:01:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.007 14:01:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1462361 00:05:48.007 14:01:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1462361 00:05:48.007 14:01:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1462361 ']' 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.007 14:01:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.007 [2024-10-13 14:01:51.545824] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:48.007 [2024-10-13 14:01:51.545882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462361 ] 00:05:48.007 [2024-10-13 14:01:51.678704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
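The trace that follows shows test_dpdk_mem_info.sh launching spdk_tgt, waiting for its RPC socket, then driving env_dpdk_get_mem_stats and the dpdk_mem_info.py summarizer. A minimal sketch of that flow, assuming the workspace paths from this log and the default /var/tmp/spdk.sock socket:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk.sock
  "$SPDK/build/bin/spdk_tgt" &                 # target under test
  pid=$!
  # waitforlisten pattern: poll until the JSON-RPC socket answers
  until "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  "$SPDK/scripts/rpc.py" -s "$SOCK" env_dpdk_get_mem_stats   # dumps to /tmp/spdk_mem_dump.txt
  "$SPDK/scripts/dpdk_mem_info.py"                           # heap/mempool/memzone totals
  "$SPDK/scripts/dpdk_mem_info.py" -m 0                      # per-element detail for heap 0
  kill "$pid"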
00:05:48.268 [2024-10-13 14:01:51.725809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.268 [2024-10-13 14:01:51.749532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.839 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.839 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:48.839 14:01:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.839 14:01:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.839 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.839 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.839 { 00:05:48.839 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.839 } 00:05:48.839 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.839 14:01:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.839 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:48.839 1 heaps totaling size 810.000000 MiB 00:05:48.839 size: 810.000000 MiB heap id: 0 00:05:48.839 end heaps---------- 00:05:48.839 9 mempools totaling size 595.772034 MiB 00:05:48.839 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.839 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.839 size: 92.545471 MiB name: bdev_io_1462361 00:05:48.839 size: 50.003479 MiB name: msgpool_1462361 00:05:48.839 size: 36.509338 MiB name: fsdev_io_1462361 00:05:48.839 size: 21.763794 MiB name: PDU_Pool 00:05:48.839 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.839 size: 4.133484 MiB name: evtpool_1462361 00:05:48.839 size: 0.026123 MiB name: Session_Pool 00:05:48.839 end mempools------- 00:05:48.839 6 memzones totaling size 4.142822 MiB 00:05:48.839 size: 1.000366 MiB name: RG_ring_0_1462361 00:05:48.839 size: 1.000366 MiB name: RG_ring_1_1462361 00:05:48.839 size: 1.000366 MiB name: RG_ring_4_1462361 00:05:48.839 size: 1.000366 MiB name: RG_ring_5_1462361 00:05:48.839 size: 0.125366 MiB name: RG_ring_2_1462361 00:05:48.839 size: 0.015991 MiB name: RG_ring_3_1462361 00:05:48.839 end memzones------- 00:05:48.839 14:01:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.839 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:48.839 list of free elements. 
size: 10.737488 MiB 00:05:48.839 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:48.839 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:48.839 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:48.839 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:48.839 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:48.839 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:48.839 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:48.839 element at address: 0x200000200000 with size: 0.592346 MiB 00:05:48.839 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:48.839 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:48.839 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:48.839 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:48.839 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:48.839 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:48.839 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:48.839 list of standard malloc elements. size: 199.343628 MiB 00:05:48.839 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:48.839 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:48.839 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:48.839 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:48.839 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:48.839 element at address: 0x2000003b9f00 with size: 0.265747 MiB 00:05:48.839 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:48.839 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:48.839 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:48.839 element at address: 0x2000002b7c40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000003b9e40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:48.839 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:48.839 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:48.839 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:48.839 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:48.839 list of memzone associated elements. size: 599.918884 MiB 00:05:48.839 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:48.839 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.839 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:48.839 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.839 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:48.839 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1462361_0 00:05:48.839 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:48.839 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1462361_0 00:05:48.839 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:48.839 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1462361_0 00:05:48.839 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:48.839 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.839 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:48.839 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.839 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:48.839 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1462361_0 00:05:48.839 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:48.839 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1462361 00:05:48.839 element at address: 0x2000002b7d00 with size: 1.008118 MiB 00:05:48.839 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1462361 00:05:48.839 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:48.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.839 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:48.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.839 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:48.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.839 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:48.839 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.840 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:48.840 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1462361 00:05:48.840 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:48.840 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1462361 00:05:48.840 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:48.840 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1462361 00:05:48.840 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:48.840 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1462361 00:05:48.840 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:48.840 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1462361 00:05:48.840 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:48.840 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1462361 00:05:48.840 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:48.840 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.840 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:48.840 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:48.840 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:48.840 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.840 element at address: 0x200000297a40 with size: 0.125488 MiB 00:05:48.840 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1462361 00:05:48.840 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:48.840 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1462361 00:05:48.840 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:48.840 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.840 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:48.840 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.840 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:48.840 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1462361 00:05:48.840 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:48.840 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.840 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:48.840 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1462361 00:05:48.840 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:48.840 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1462361 00:05:48.840 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:48.840 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1462361 00:05:48.840 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:48.840 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.840 14:01:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.840 14:01:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1462361 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1462361 ']' 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1462361 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462361 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462361' 00:05:48.840 killing process with pid 1462361 00:05:48.840 14:01:52 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1462361 00:05:48.840 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1462361 00:05:49.101 00:05:49.101 real 0m1.399s 00:05:49.101 user 0m1.373s 00:05:49.101 sys 0m0.421s 00:05:49.101 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.101 14:01:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.101 ************************************ 00:05:49.101 END TEST dpdk_mem_utility 00:05:49.101 ************************************ 00:05:49.101 14:01:52 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.101 14:01:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.101 14:01:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.101 14:01:52 -- common/autotest_common.sh@10 -- # set +x 00:05:49.101 ************************************ 00:05:49.101 START TEST event 00:05:49.101 ************************************ 00:05:49.101 14:01:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.361 * Looking for test storage... 00:05:49.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.361 14:01:52 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.361 14:01:52 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.361 14:01:52 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.361 14:01:52 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.361 14:01:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.361 14:01:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.361 14:01:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.361 14:01:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.361 14:01:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.361 14:01:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.361 14:01:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.361 14:01:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.361 14:01:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.361 14:01:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.361 14:01:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.361 14:01:52 event -- scripts/common.sh@344 -- # case "$op" in 00:05:49.361 14:01:52 event -- scripts/common.sh@345 -- # : 1 00:05:49.361 14:01:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.361 14:01:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.361 14:01:52 event -- scripts/common.sh@365 -- # decimal 1 00:05:49.361 14:01:52 event -- scripts/common.sh@353 -- # local d=1 00:05:49.361 14:01:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.361 14:01:52 event -- scripts/common.sh@355 -- # echo 1 00:05:49.361 14:01:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.361 14:01:52 event -- scripts/common.sh@366 -- # decimal 2 00:05:49.361 14:01:52 event -- scripts/common.sh@353 -- # local d=2 00:05:49.361 14:01:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.361 14:01:52 event -- scripts/common.sh@355 -- # echo 2 00:05:49.361 14:01:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.361 14:01:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.361 14:01:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.361 14:01:52 event -- scripts/common.sh@368 -- # return 0 00:05:49.361 14:01:52 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.361 14:01:52 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.361 --rc genhtml_branch_coverage=1 00:05:49.361 --rc genhtml_function_coverage=1 00:05:49.361 --rc genhtml_legend=1 00:05:49.361 --rc geninfo_all_blocks=1 00:05:49.361 --rc geninfo_unexecuted_blocks=1 00:05:49.362 00:05:49.362 ' 00:05:49.362 14:01:52 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.362 --rc genhtml_branch_coverage=1 00:05:49.362 --rc genhtml_function_coverage=1 00:05:49.362 --rc genhtml_legend=1 00:05:49.362 --rc geninfo_all_blocks=1 00:05:49.362 --rc geninfo_unexecuted_blocks=1 00:05:49.362 00:05:49.362 ' 00:05:49.362 14:01:52 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.362 --rc genhtml_branch_coverage=1 00:05:49.362 --rc genhtml_function_coverage=1 00:05:49.362 --rc genhtml_legend=1 00:05:49.362 --rc geninfo_all_blocks=1 00:05:49.362 --rc geninfo_unexecuted_blocks=1 00:05:49.362 00:05:49.362 ' 00:05:49.362 14:01:52 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.362 --rc genhtml_branch_coverage=1 00:05:49.362 --rc genhtml_function_coverage=1 00:05:49.362 --rc genhtml_legend=1 00:05:49.362 --rc geninfo_all_blocks=1 00:05:49.362 --rc geninfo_unexecuted_blocks=1 00:05:49.362 00:05:49.362 ' 00:05:49.362 14:01:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.362 14:01:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.362 14:01:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.362 14:01:52 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:49.362 14:01:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.362 14:01:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.362 ************************************ 00:05:49.362 START TEST event_perf 00:05:49.362 ************************************ 00:05:49.362 14:01:52 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:49.362 Running I/O for 1 seconds...[2024-10-13 14:01:53.014335] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:49.362 [2024-10-13 14:01:53.014415] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462698 ] 00:05:49.622 [2024-10-13 14:01:53.152481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.622 [2024-10-13 14:01:53.198075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.622 [2024-10-13 14:01:53.218232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.622 [2024-10-13 14:01:53.218439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.622 [2024-10-13 14:01:53.218764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.622 [2024-10-13 14:01:53.218764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.565 Running I/O for 1 seconds... 00:05:50.565 lcore 0: 175150 00:05:50.565 lcore 1: 175153 00:05:50.565 lcore 2: 175150 00:05:50.565 lcore 3: 175151 00:05:50.565 done. 00:05:50.565 00:05:50.565 real 0m1.246s 00:05:50.565 user 0m4.050s 00:05:50.565 sys 0m0.087s 00:05:50.565 14:01:54 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.565 14:01:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.565 ************************************ 00:05:50.565 END TEST event_perf 00:05:50.565 ************************************ 00:05:50.826 14:01:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.826 14:01:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:50.826 14:01:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.826 14:01:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.826 ************************************ 00:05:50.826 START TEST event_reactor 00:05:50.826 ************************************ 00:05:50.826 14:01:54 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.826 [2024-10-13 14:01:54.337288] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:50.826 [2024-10-13 14:01:54.337366] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462874 ] 00:05:50.826 [2024-10-13 14:01:54.473035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
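In the event_perf summary above, each "lcore N:" line is the number of events that core processed during the one-second run (about 175k apiece here). A hedged way to rerun it by hand and total the counters, using the binary path and flags from the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Sum the per-lcore event counts; with -t 1 the total is events per second.
  "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1 \
      | awk '/^lcore/ { total += $3 } END { print "total events:", total }'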
00:05:50.826 [2024-10-13 14:01:54.521191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.087 [2024-10-13 14:01:54.537898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.029 test_start 00:05:52.029 oneshot 00:05:52.029 tick 100 00:05:52.029 tick 100 00:05:52.029 tick 250 00:05:52.029 tick 100 00:05:52.029 tick 100 00:05:52.029 tick 250 00:05:52.029 tick 100 00:05:52.029 tick 500 00:05:52.029 tick 100 00:05:52.029 tick 100 00:05:52.029 tick 250 00:05:52.029 tick 100 00:05:52.029 tick 100 00:05:52.029 test_end 00:05:52.029 00:05:52.029 real 0m1.240s 00:05:52.029 user 0m1.060s 00:05:52.029 sys 0m0.077s 00:05:52.029 14:01:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.029 14:01:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.029 ************************************ 00:05:52.029 END TEST event_reactor 00:05:52.029 ************************************ 00:05:52.029 14:01:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.029 14:01:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:52.029 14:01:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.029 14:01:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.029 ************************************ 00:05:52.029 START TEST event_reactor_perf 00:05:52.029 ************************************ 00:05:52.029 14:01:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.029 [2024-10-13 14:01:55.654714] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:52.029 [2024-10-13 14:01:55.654803] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463155 ] 00:05:52.290 [2024-10-13 14:01:55.788510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
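The oneshot and tick lines above come from the reactor test's timer pollers; the number after "tick" is the period each poller was registered with (in the test's own period units), so "tick 100" fires most often and "tick 500" least. A quick sketch for tallying a rerun by hand:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Count how many times each tick period fired during a 1-second run.
  "$SPDK/test/event/reactor/reactor" -t 1 | grep '^tick' | sort | uniq -c
  # expected shape: counts roughly inverse to the periods, i.e. 100 > 250 > 500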
00:05:52.290 [2024-10-13 14:01:55.835556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.290 [2024-10-13 14:01:55.852014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.232 test_start 00:05:53.232 test_end 00:05:53.232 Performance: 533376 events per second 00:05:53.232 00:05:53.232 real 0m1.236s 00:05:53.232 user 0m1.064s 00:05:53.232 sys 0m0.068s 00:05:53.232 14:01:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.232 14:01:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.232 ************************************ 00:05:53.232 END TEST event_reactor_perf 00:05:53.232 ************************************ 00:05:53.232 14:01:56 event -- event/event.sh@49 -- # uname -s 00:05:53.232 14:01:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.232 14:01:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.232 14:01:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.232 14:01:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.232 14:01:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.493 ************************************ 00:05:53.493 START TEST event_scheduler 00:05:53.493 ************************************ 00:05:53.493 14:01:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.493 * Looking for test storage... 00:05:53.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.493 14:01:57 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.493 14:01:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.493 14:01:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.493 14:01:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.493 14:01:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.494 14:01:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.494 --rc genhtml_branch_coverage=1 00:05:53.494 --rc genhtml_function_coverage=1 00:05:53.494 --rc genhtml_legend=1 00:05:53.494 --rc geninfo_all_blocks=1 00:05:53.494 --rc geninfo_unexecuted_blocks=1 00:05:53.494 00:05:53.494 ' 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.494 --rc genhtml_branch_coverage=1 00:05:53.494 --rc genhtml_function_coverage=1 00:05:53.494 --rc genhtml_legend=1 00:05:53.494 --rc geninfo_all_blocks=1 00:05:53.494 --rc geninfo_unexecuted_blocks=1 00:05:53.494 00:05:53.494 ' 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.494 --rc genhtml_branch_coverage=1 00:05:53.494 --rc genhtml_function_coverage=1 00:05:53.494 --rc genhtml_legend=1 00:05:53.494 --rc geninfo_all_blocks=1 00:05:53.494 --rc geninfo_unexecuted_blocks=1 00:05:53.494 00:05:53.494 ' 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.494 --rc genhtml_branch_coverage=1 00:05:53.494 --rc genhtml_function_coverage=1 00:05:53.494 --rc genhtml_legend=1 00:05:53.494 --rc geninfo_all_blocks=1 00:05:53.494 --rc geninfo_unexecuted_blocks=1 00:05:53.494 00:05:53.494 ' 00:05:53.494 14:01:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.494 14:01:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1463544 00:05:53.494 14:01:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.494 14:01:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1463544 00:05:53.494 14:01:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1463544 ']' 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.494 14:01:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.755 [2024-10-13 14:01:57.205120] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:05:53.755 [2024-10-13 14:01:57.205194] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463544 ] 00:05:53.755 [2024-10-13 14:01:57.340774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.755 [2024-10-13 14:01:57.390906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.755 [2024-10-13 14:01:57.423895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.755 [2024-10-13 14:01:57.424059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.755 [2024-10-13 14:01:57.424218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.755 [2024-10-13 14:01:57.424328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:54.328 14:01:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.328 [2024-10-13 14:01:58.025038] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:54.328 [2024-10-13 14:01:58.025055] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.328 [2024-10-13 14:01:58.025070] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.328 [2024-10-13 14:01:58.025077] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.328 [2024-10-13 14:01:58.025082] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.328 14:01:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.328 14:01:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.638 [2024-10-13 14:01:58.084589] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application 
started. 00:05:54.638 14:01:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.638 14:01:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.638 14:01:58 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.638 14:01:58 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.638 14:01:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.638 ************************************ 00:05:54.638 START TEST scheduler_create_thread 00:05:54.638 ************************************ 00:05:54.638 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:54.638 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.638 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.638 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 2 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 3 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 4 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 5 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 6 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 7 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 8 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 9 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.639 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.957 10 00:05:54.957 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.217 14:01:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.217 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.217 14:01:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.603 14:01:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.603 14:01:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.603 14:01:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.603 14:01:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.603 14:01:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.174 14:02:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.174 14:02:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.174 14:02:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.174 14:02:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.116 14:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.116 14:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.116 14:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.116 14:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.116 14:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.686 14:02:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.686 00:05:58.686 real 0m4.213s 00:05:58.686 user 0m0.026s 00:05:58.686 sys 0m0.006s 00:05:58.686 14:02:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.686 14:02:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.686 ************************************ 00:05:58.686 END TEST scheduler_create_thread 00:05:58.686 ************************************ 00:05:58.686 14:02:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.686 14:02:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1463544 00:05:58.686 14:02:02 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1463544 ']' 00:05:58.686 14:02:02 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1463544 00:05:58.686 14:02:02 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:58.686 14:02:02 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.686 14:02:02 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463544 00:05:58.947 14:02:02 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:58.947 14:02:02 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:58.947 14:02:02 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463544' 00:05:58.947 killing process with pid 1463544 00:05:58.947 14:02:02 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1463544 00:05:58.947 14:02:02 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1463544 00:05:58.947 [2024-10-13 14:02:02.615568] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
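The scheduler run above created and reshaped its SPDK threads entirely through an rpc.py plugin, visible in the trace as "rpc_cmd --plugin scheduler_plugin ...". A hedged sketch of the same calls by hand (it assumes scheduler_plugin is importable, for example by putting the test directory on PYTHONPATH):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export PYTHONPATH="$SPDK/test/event/scheduler:${PYTHONPATH:-}"
  RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
  # -m is the cpumask the thread is pinned to, -a its target busy percentage;
  # the dynamic scheduler then rebalances these threads across reactors.
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread on core 0
  $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread, same core
  thread_id=$($RPC scheduler_thread_create -n half_active -a 0) # unpinned; prints its id
  $RPC scheduler_thread_set_active "$thread_id" 50              # raise it to 50% busy
  $RPC scheduler_thread_delete "$thread_id"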
00:05:59.208 00:05:59.208 real 0m5.814s 00:05:59.208 user 0m12.555s 00:05:59.208 sys 0m0.433s 00:05:59.208 14:02:02 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.208 14:02:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.208 ************************************ 00:05:59.208 END TEST event_scheduler 00:05:59.208 ************************************ 00:05:59.208 14:02:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.208 14:02:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.208 14:02:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.208 14:02:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.208 14:02:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.208 ************************************ 00:05:59.208 START TEST app_repeat 00:05:59.208 ************************************ 00:05:59.208 14:02:02 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1464717 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1464717' 00:05:59.208 Process app_repeat pid: 1464717 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.208 spdk_app_start Round 0 00:05:59.208 14:02:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1464717 /var/tmp/spdk-nbd.sock 00:05:59.208 14:02:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1464717 ']' 00:05:59.208 14:02:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.208 14:02:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.209 14:02:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.209 14:02:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.209 14:02:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.209 [2024-10-13 14:02:02.890210] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
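The app_repeat test starting here round-trips malloc bdevs over NBD: the nbd_common.sh helpers traced below create the bdevs, expose them as /dev/nbd0 and /dev/nbd1, then verify them with dd. One round of that flow, sketched with the same RPC calls and sizes that appear in the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                # 64 MiB bdev, 4 KiB blocks ("Malloc0" in this log)
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct      # device answers reads
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  # a full verify would read the region back and cmp(1) it against nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0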
00:05:59.209 [2024-10-13 14:02:02.890281] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464717 ] 00:05:59.469 [2024-10-13 14:02:03.024329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.469 [2024-10-13 14:02:03.073668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.469 [2024-10-13 14:02:03.099414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.469 [2024-10-13 14:02:03.099416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.040 14:02:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.040 14:02:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.040 14:02:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.300 Malloc0 00:06:00.300 14:02:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.560 Malloc1 00:06:00.560 14:02:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.560 14:02:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.561 14:02:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.822 /dev/nbd0 00:06:00.822 14:02:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.822 14:02:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.822 1+0 records in 00:06:00.822 1+0 records out 00:06:00.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286491 s, 14.3 MB/s 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.822 14:02:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.822 14:02:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.822 14:02:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.822 14:02:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.084 /dev/nbd1 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.084 1+0 records in 00:06:01.084 1+0 records out 00:06:01.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283253 s, 14.5 MB/s 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.084 14:02:04 event.app_repeat -- common/autotest_common.sh@889 -- # 
return 0 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.084 { 00:06:01.084 "nbd_device": "/dev/nbd0", 00:06:01.084 "bdev_name": "Malloc0" 00:06:01.084 }, 00:06:01.084 { 00:06:01.084 "nbd_device": "/dev/nbd1", 00:06:01.084 "bdev_name": "Malloc1" 00:06:01.084 } 00:06:01.084 ]' 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.084 { 00:06:01.084 "nbd_device": "/dev/nbd0", 00:06:01.084 "bdev_name": "Malloc0" 00:06:01.084 }, 00:06:01.084 { 00:06:01.084 "nbd_device": "/dev/nbd1", 00:06:01.084 "bdev_name": "Malloc1" 00:06:01.084 } 00:06:01.084 ]' 00:06:01.084 14:02:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.344 /dev/nbd1' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.344 /dev/nbd1' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.344 256+0 records in 00:06:01.344 256+0 records out 00:06:01.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127788 s, 82.1 MB/s 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.344 256+0 records in 00:06:01.344 256+0 records out 00:06:01.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118718 s, 88.3 MB/s 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 
count=256 oflag=direct 00:06:01.344 256+0 records in 00:06:01.344 256+0 records out 00:06:01.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130704 s, 80.2 MB/s 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.344 14:02:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.605 14:02:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.865 14:02:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.865 14:02:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.126 14:02:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.126 [2024-10-13 14:02:05.760367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.126 [2024-10-13 14:02:05.776330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.126 [2024-10-13 14:02:05.776423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.126 [2024-10-13 14:02:05.805647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.126 [2024-10-13 14:02:05.805680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.426 14:02:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.426 14:02:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.426 spdk_app_start Round 1 00:06:05.426 14:02:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1464717 /var/tmp/spdk-nbd.sock 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1464717 ']' 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
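The waitfornbd helper traced in Round 0 is worth spelling out: it polls /proc/partitions until the device node shows up, then proves the export answers reads. A rough equivalent with the same 20-try budget (the retry delay and the temp path are assumptions, not taken from the trace):

  waitfornbd() {                        # usage: waitfornbd nbd0
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                         # assumed back-off between tries
    done
    # One 4 KiB direct-I/O read confirms the NBD connection is live.
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                    # the trace checks '[' 4096 '!=' 0 ']'
  }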
00:06:05.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.426 14:02:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.426 14:02:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.426 Malloc0 00:06:05.426 14:02:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.686 Malloc1 00:06:05.686 14:02:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.686 14:02:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.946 /dev/nbd0 00:06:05.946 14:02:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.946 14:02:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.946 14:02:09 event.app_repeat -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.946 1+0 records in 00:06:05.946 1+0 records out 00:06:05.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277429 s, 14.8 MB/s 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.946 14:02:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.946 14:02:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.946 14:02:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.946 14:02:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.207 /dev/nbd1 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.207 1+0 records in 00:06:06.207 1+0 records out 00:06:06.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269937 s, 15.2 MB/s 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.207 14:02:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.207 { 00:06:06.207 "nbd_device": "/dev/nbd0", 00:06:06.207 "bdev_name": "Malloc0" 00:06:06.207 }, 00:06:06.207 { 00:06:06.207 "nbd_device": "/dev/nbd1", 00:06:06.207 "bdev_name": "Malloc1" 00:06:06.207 } 00:06:06.207 ]' 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.207 { 00:06:06.207 "nbd_device": "/dev/nbd0", 00:06:06.207 "bdev_name": "Malloc0" 00:06:06.207 }, 00:06:06.207 { 00:06:06.207 "nbd_device": "/dev/nbd1", 00:06:06.207 "bdev_name": "Malloc1" 00:06:06.207 } 00:06:06.207 ]' 00:06:06.207 14:02:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.467 /dev/nbd1' 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.467 /dev/nbd1' 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.467 14:02:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.468 256+0 records in 00:06:06.468 256+0 records out 00:06:06.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127201 s, 82.4 MB/s 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.468 256+0 records in 00:06:06.468 256+0 records out 00:06:06.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122752 s, 85.4 MB/s 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.468 14:02:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.468 256+0 records in 00:06:06.468 256+0 records out 00:06:06.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145066 s, 72.3 MB/s 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
local nbd_list 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.468 14:02:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.728 14:02:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:06:06.729 14:02:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.729 14:02:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.729 14:02:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.989 14:02:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.989 14:02:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.249 14:02:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.249 [2024-10-13 14:02:10.925084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.249 [2024-10-13 14:02:10.941056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.249 [2024-10-13 14:02:10.941057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.509 [2024-10-13 14:02:10.970865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.509 [2024-10-13 14:02:10.970896] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.807 14:02:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.807 14:02:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.807 spdk_app_start Round 2 00:06:10.808 14:02:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1464717 /var/tmp/spdk-nbd.sock 00:06:10.808 14:02:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1464717 ']' 00:06:10.808 14:02:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.808 14:02:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.808 14:02:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
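Each round's data check is the same dd/cmp cycle seen above: write 1 MiB of random data through both NBD devices with direct I/O, then compare each device byte-for-byte against the source file. Condensed from the trace, with the temp file placed under /tmp instead of the workspace:

  src=/tmp/nbdrandtest
  dd if=/dev/urandom of=$src bs=4096 count=256           # 1 MiB of test data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$src of=$nbd bs=4096 count=256 oflag=direct    # write through NBD
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $src $nbd                               # verify via read path
  done
  rm $src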
00:06:10.808 14:02:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.808 14:02:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.808 14:02:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.808 14:02:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.808 14:02:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.808 Malloc0 00:06:10.808 14:02:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.808 Malloc1 00:06:10.808 14:02:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.808 14:02:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.069 /dev/nbd0 00:06:11.069 14:02:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.069 14:02:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.069 1+0 records in 00:06:11.069 1+0 records out 00:06:11.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270619 s, 15.1 MB/s 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.069 14:02:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.069 14:02:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.069 14:02:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.069 14:02:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.330 /dev/nbd1 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.330 1+0 records in 00:06:11.330 1+0 records out 00:06:11.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272759 s, 15.0 MB/s 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.330 14:02:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.330 14:02:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.590 14:02:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:11.590 { 00:06:11.590 "nbd_device": "/dev/nbd0", 00:06:11.590 "bdev_name": "Malloc0" 00:06:11.590 }, 00:06:11.590 { 00:06:11.590 "nbd_device": "/dev/nbd1", 00:06:11.590 "bdev_name": "Malloc1" 00:06:11.590 } 00:06:11.590 ]' 00:06:11.590 14:02:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.590 { 00:06:11.590 "nbd_device": "/dev/nbd0", 00:06:11.591 "bdev_name": "Malloc0" 00:06:11.591 }, 00:06:11.591 { 00:06:11.591 "nbd_device": "/dev/nbd1", 00:06:11.591 "bdev_name": "Malloc1" 00:06:11.591 } 00:06:11.591 ]' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.591 /dev/nbd1' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.591 /dev/nbd1' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.591 256+0 records in 00:06:11.591 256+0 records out 00:06:11.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118064 s, 88.8 MB/s 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.591 256+0 records in 00:06:11.591 256+0 records out 00:06:11.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124906 s, 83.9 MB/s 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.591 256+0 records in 00:06:11.591 256+0 records out 00:06:11.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133418 s, 78.6 MB/s 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.591 14:02:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.851 14:02:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.112 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.372 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.372 14:02:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.372 14:02:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.372 14:02:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.372 14:02:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.372 14:02:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.372 14:02:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.372 14:02:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.632 [2024-10-13 14:02:16.087800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.632 [2024-10-13 14:02:16.103618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.632 [2024-10-13 14:02:16.103619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.632 [2024-10-13 14:02:16.132728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.632 [2024-10-13 14:02:16.132758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.937 14:02:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1464717 /var/tmp/spdk-nbd.sock 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1464717 ']' 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
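Teardown between rounds, as just traced, detaches both NBD exports, insists that nbd_get_disks comes back empty, and then asks the app to exit so the next spdk_app_start round can begin. Reusing the $RPC shorthand from the setup sketch:

  for nbd in /dev/nbd0 /dev/nbd1; do
    $RPC nbd_stop_disk $nbd
  done
  # grep -c prints 0 (and exits non-zero) when nothing matches, hence || true.
  count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
  $RPC spdk_kill_instance SIGTERM       # graceful shutdown request
  sleep 3                               # matches the event.sh@35 sleep above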
00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.937 14:02:19 event.app_repeat -- event/event.sh@39 -- # killprocess 1464717 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1464717 ']' 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1464717 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464717 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.937 14:02:19 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464717' 00:06:15.937 killing process with pid 1464717 00:06:15.938 14:02:19 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1464717 00:06:15.938 14:02:19 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1464717 00:06:15.938 spdk_app_start is called in Round 0. 00:06:15.938 Shutdown signal received, stop current app iteration 00:06:15.938 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 reinitialization... 00:06:15.938 spdk_app_start is called in Round 1. 00:06:15.938 Shutdown signal received, stop current app iteration 00:06:15.938 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 reinitialization... 00:06:15.938 spdk_app_start is called in Round 2. 00:06:15.938 Shutdown signal received, stop current app iteration 00:06:15.938 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 reinitialization... 00:06:15.938 spdk_app_start is called in Round 3. 
00:06:15.938 Shutdown signal received, stop current app iteration 00:06:15.938 14:02:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.938 14:02:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.938 00:06:15.938 real 0m16.480s 00:06:15.938 user 0m36.051s 00:06:15.938 sys 0m2.340s 00:06:15.938 14:02:19 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.938 14:02:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.938 ************************************ 00:06:15.938 END TEST app_repeat 00:06:15.938 ************************************ 00:06:15.938 14:02:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.938 14:02:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.938 14:02:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.938 14:02:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.938 14:02:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.938 ************************************ 00:06:15.938 START TEST cpu_locks 00:06:15.938 ************************************ 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.938 * Looking for test storage... 00:06:15.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.938 14:02:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.938 --rc genhtml_branch_coverage=1 00:06:15.938 --rc genhtml_function_coverage=1 00:06:15.938 --rc genhtml_legend=1 00:06:15.938 --rc geninfo_all_blocks=1 00:06:15.938 --rc geninfo_unexecuted_blocks=1 00:06:15.938 00:06:15.938 ' 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.938 --rc genhtml_branch_coverage=1 00:06:15.938 --rc genhtml_function_coverage=1 00:06:15.938 --rc genhtml_legend=1 00:06:15.938 --rc geninfo_all_blocks=1 00:06:15.938 --rc geninfo_unexecuted_blocks=1 00:06:15.938 00:06:15.938 ' 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:15.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.938 --rc genhtml_branch_coverage=1 00:06:15.938 --rc genhtml_function_coverage=1 00:06:15.938 --rc genhtml_legend=1 00:06:15.938 --rc geninfo_all_blocks=1 00:06:15.938 --rc geninfo_unexecuted_blocks=1 00:06:15.938 00:06:15.938 ' 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.938 --rc genhtml_branch_coverage=1 00:06:15.938 --rc genhtml_function_coverage=1 00:06:15.938 --rc genhtml_legend=1 00:06:15.938 --rc geninfo_all_blocks=1 00:06:15.938 --rc geninfo_unexecuted_blocks=1 00:06:15.938 00:06:15.938 ' 00:06:15.938 14:02:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.938 14:02:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.938 14:02:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.938 14:02:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.938 14:02:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.938 ************************************ 
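The lcov probe that opens cpu_locks is scripts/common.sh's field-wise version compare: each version string is split on '.', '-' and ':' and the fields are compared numerically left to right. A paraphrase of the traced logic (numeric components only; the real helper routes each field through its decimal check):

  cmp_versions() {                      # e.g. cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local v max
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
      if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
      if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
    done
    [[ $2 == '<=' || $2 == '>=' || $2 == '==' ]]
  }

Here cmp_versions 1.15 '<' 2 succeeds (1 < 2 in the first field), which is why the branch- and function-coverage rc options get exported above.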
00:06:15.938 START TEST default_locks 00:06:15.938 ************************************ 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1468237 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1468237 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1468237 ']' 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.938 14:02:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.199 [2024-10-13 14:02:19.700239] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:16.199 [2024-10-13 14:02:19.700300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468237 ] 00:06:16.199 [2024-10-13 14:02:19.836032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
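The trace above launches spdk_tgt with core mask 0x1 and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming only standard shell tools; the real helper in autotest_common.sh additionally verifies the RPC server responds and caps attempts (max_retries=100, as seen in the trace):

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            # give up early if the target died during startup
            kill -0 "$pid" 2>/dev/null || return 1
            # the target creates the socket once it is ready to serve RPCs
            [[ -S $sock ]] && return 0
            sleep 0.1
        done
        return 1
    }
    # usage:
    #   build/bin/spdk_tgt -m 0x1 &
    #   wait_for_rpc_socket $! || exit 1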
00:06:16.199 [2024-10-13 14:02:19.883826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.459 [2024-10-13 14:02:19.908028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.029 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.029 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:17.029 14:02:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1468237 00:06:17.029 14:02:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.029 14:02:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1468237 00:06:17.289 lslocks: write error 00:06:17.289 14:02:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1468237 00:06:17.289 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1468237 ']' 00:06:17.289 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1468237 00:06:17.289 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.289 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.289 14:02:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468237' 00:06:17.549 killing process with pid 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1468237 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1468237 ']' 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
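The "lslocks: write error" above is harmless: locks_exist (cpu_locks.sh@22) pipes lslocks -p into grep -q, and grep exits on the first match, so lslocks takes an EPIPE while still writing. The check itself amounts to this sketch, assuming util-linux lslocks:

    locks_exist() {
        local pid=$1
        # the target holds a lock file /var/tmp/spdk_cpu_lock_NNN per claimed core;
        # grep -q closes the pipe on first match, hence the harmless write error
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }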
00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1468237) - No such process 00:06:17.549 ERROR: process (pid: 1468237) is no longer running 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.549 00:06:17.549 real 0m1.580s 00:06:17.549 user 0m1.564s 00:06:17.549 sys 0m0.596s 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.549 14:02:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.549 ************************************ 00:06:17.549 END TEST default_locks 00:06:17.549 ************************************ 00:06:17.810 14:02:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:17.810 14:02:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.810 14:02:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.810 14:02:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.810 ************************************ 00:06:17.810 START TEST default_locks_via_rpc 00:06:17.810 ************************************ 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1468595 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1468595 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1468595 ']' 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
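In the default_locks teardown above, the test deliberately re-ran waitforlisten on the killed PID under the NOT wrapper and expected it to fail (es=1, confirmed by the "No such process" output). Stripped of the argument validation seen in the trace (valid_exec_arg, type -t), the wrapper reduces to a sketch like:

    NOT() {
        # invert the wrapped command: succeed only if it fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # NOT waitforlisten "$dead_pid" && echo "listener is gone, as expected"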
00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.810 14:02:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.810 [2024-10-13 14:02:21.350566] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:17.810 [2024-10-13 14:02:21.350620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468595 ] 00:06:17.810 [2024-10-13 14:02:21.484574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.071 [2024-10-13 14:02:21.530299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.071 [2024-10-13 14:02:21.547956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1468595 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1468595 00:06:18.642 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1468595 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1468595 ']' 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1468595 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468595 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468595' 00:06:19.213 killing process with pid 1468595 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1468595 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1468595 00:06:19.213 00:06:19.213 real 0m1.620s 00:06:19.213 user 0m1.625s 00:06:19.213 sys 0m0.581s 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.213 14:02:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.213 ************************************ 00:06:19.213 END TEST default_locks_via_rpc 00:06:19.213 ************************************ 00:06:19.475 14:02:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.475 14:02:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.475 14:02:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.475 14:02:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 ************************************ 00:06:19.475 START TEST non_locking_app_on_locked_coremask 00:06:19.475 ************************************ 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1468961 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1468961 /var/tmp/spdk.sock 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1468961 ']' 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.475 14:02:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.475 [2024-10-13 14:02:23.047184] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
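default_locks_via_rpc, just completed above, exercises the same locks through JSON-RPC instead of process lifetime: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them, which is why the lslocks check runs only after the enable call. Roughly, using SPDK's scripts/rpc.py (the tgt_pid variable here is illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the core locks at runtime
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock                  # 0: nothing held
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them
    lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock                  # 1: core 0 locked again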
00:06:19.475 [2024-10-13 14:02:23.047241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468961 ] 00:06:19.475 [2024-10-13 14:02:23.180788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.736 [2024-10-13 14:02:23.228933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.736 [2024-10-13 14:02:23.253515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1469288 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1469288 /var/tmp/spdk2.sock 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1469288 ']' 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.307 14:02:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.307 [2024-10-13 14:02:23.894401] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:20.307 [2024-10-13 14:02:23.894452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469288 ] 00:06:20.568 [2024-10-13 14:02:24.026423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.568 [2024-10-13 14:02:24.067958] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
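For non_locking_app_on_locked_coremask, a second target is started on the same core 0 but with --disable-cpumask-locks and its own RPC socket (-r /var/tmp/spdk2.sock), so it coexists with the lock holder instead of fighting over the lock. The setup, as a sketch:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                          # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                       # same core, no lock taken
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid1 holds the core 0 lock"
    lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "pid2 holds no cpu lock"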
00:06:20.568 [2024-10-13 14:02:24.067976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.568 [2024-10-13 14:02:24.100382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.137 14:02:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.137 14:02:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:21.137 14:02:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1468961 00:06:21.137 14:02:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.137 14:02:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1468961 00:06:21.709 lslocks: write error 00:06:21.709 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1468961 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1468961 ']' 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1468961 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468961 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468961' 00:06:21.710 killing process with pid 1468961 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1468961 00:06:21.710 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1468961 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1469288 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1469288 ']' 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1469288 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469288 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469288' 00:06:22.281 
killing process with pid 1469288 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1469288 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1469288 00:06:22.281 00:06:22.281 real 0m2.975s 00:06:22.281 user 0m3.226s 00:06:22.281 sys 0m0.907s 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.281 14:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.281 ************************************ 00:06:22.281 END TEST non_locking_app_on_locked_coremask 00:06:22.281 ************************************ 00:06:22.542 14:02:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:22.542 14:02:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.542 14:02:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.542 14:02:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.542 ************************************ 00:06:22.542 START TEST locking_app_on_unlocked_coremask 00:06:22.542 ************************************ 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1469671 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1469671 /var/tmp/spdk.sock 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1469671 ']' 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.542 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.542 [2024-10-13 14:02:26.094114] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:22.542 [2024-10-13 14:02:26.094171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469671 ] 00:06:22.542 [2024-10-13 14:02:26.227813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
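The -m argument in all of these runs is a hex bitmap of CPU cores: 0x1 is core 0 alone, which matches the "Total cores available: 1" notices above. For reference, decoding a mask in shell:

    mask=0x7
    for core in $(seq 0 31); do
        if (( (mask >> core) & 1 )); then
            echo "mask $mask selects core $core"
        fi
    done
    # 0x7 -> cores 0 1 2; 0x1c -> cores 2 3 4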
00:06:22.803 [2024-10-13 14:02:26.273473] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.803 [2024-10-13 14:02:26.273502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.803 [2024-10-13 14:02:26.298582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1469878 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1469878 /var/tmp/spdk2.sock 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1469878 ']' 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.374 14:02:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.374 [2024-10-13 14:02:26.922494] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:23.374 [2024-10-13 14:02:26.922554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469878 ] 00:06:23.374 [2024-10-13 14:02:27.058439] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:23.635 [2024-10-13 14:02:27.099512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.635 [2024-10-13 14:02:27.133052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.205 14:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.205 14:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.205 14:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1469878 00:06:24.205 14:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1469878 00:06:24.205 14:02:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.776 lslocks: write error 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1469671 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1469671 ']' 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1469671 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469671 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469671' 00:06:24.776 killing process with pid 1469671 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1469671 00:06:24.776 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1469671 00:06:25.036 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1469878 00:06:25.036 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1469878 ']' 00:06:25.036 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1469878 00:06:25.036 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.036 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.036 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1469878 00:06:25.297 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.297 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.297 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1469878' 00:06:25.297 killing process with pid 1469878 00:06:25.297 
14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1469878 00:06:25.297 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1469878 00:06:25.297 00:06:25.297 real 0m2.927s 00:06:25.297 user 0m3.127s 00:06:25.297 sys 0m0.938s 00:06:25.297 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.297 14:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.297 ************************************ 00:06:25.297 END TEST locking_app_on_unlocked_coremask 00:06:25.297 ************************************ 00:06:25.297 14:02:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:25.297 14:02:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.297 14:02:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.297 14:02:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.559 ************************************ 00:06:25.559 START TEST locking_app_on_locked_coremask 00:06:25.559 ************************************ 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1470381 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1470381 /var/tmp/spdk.sock 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1470381 ']' 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.559 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.559 [2024-10-13 14:02:29.094682] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:25.559 [2024-10-13 14:02:29.094733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470381 ] 00:06:25.559 [2024-10-13 14:02:29.227178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
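locking_app_on_locked_coremask flips the previous case: the second target keeps lock checking enabled while requesting the already-claimed core 0, so spdk_app_start is expected to abort in claim_cpu_cores, which is exactly the failure the NOT waitforlisten below is waiting for. Schematically:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 & pid1=$!           # first instance claims core 0
    sleep 1                                # crude stand-in for waitforlisten
    # second instance, same mask, locks enabled: startup must fail
    if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: the core 0 lock was not enforced" >&2
    fi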
00:06:25.820 [2024-10-13 14:02:29.272985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.820 [2024-10-13 14:02:29.296819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1470405 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1470405 /var/tmp/spdk2.sock 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1470405 /var/tmp/spdk2.sock 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1470405 /var/tmp/spdk2.sock 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1470405 ']' 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.391 14:02:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.391 [2024-10-13 14:02:29.938312] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:26.391 [2024-10-13 14:02:29.938365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470405 ] 00:06:26.391 [2024-10-13 14:02:30.072441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:26.651 [2024-10-13 14:02:30.115165] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1470381 has claimed it. 00:06:26.651 [2024-10-13 14:02:30.115196] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1470405) - No such process 00:06:26.912 ERROR: process (pid: 1470405) is no longer running 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1470381 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1470381 00:06:26.912 14:02:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.485 lslocks: write error 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1470381 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1470381 ']' 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1470381 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470381 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470381' 00:06:27.485 killing process with pid 1470381 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1470381 00:06:27.485 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1470381 00:06:27.745 00:06:27.745 real 0m2.210s 00:06:27.745 user 0m2.410s 00:06:27.745 sys 0m0.624s 00:06:27.745 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.745 14:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.745 ************************************ 00:06:27.745 END TEST locking_app_on_locked_coremask 00:06:27.745 ************************************ 00:06:27.745 14:02:31 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.745 14:02:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.745 14:02:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.745 14:02:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.745 ************************************ 00:06:27.745 START TEST locking_overlapped_coremask 00:06:27.745 ************************************ 00:06:27.745 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:27.745 14:02:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1470762 00:06:27.745 14:02:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1470762 /var/tmp/spdk.sock 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1470762 ']' 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.746 14:02:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.746 [2024-10-13 14:02:31.390028] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:27.746 [2024-10-13 14:02:31.390086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470762 ] 00:06:28.006 [2024-10-13 14:02:31.521644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
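locking_overlapped_coremask runs the first target on -m 0x7 (cores 0-2) and then attempts a second on -m 0x1c (cores 2-4); the two masks intersect at core 2, the core named in the claim error that follows. The overlap is a plain bitwise AND:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> bit 2 -> core 2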
00:06:28.006 [2024-10-13 14:02:31.568735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.006 [2024-10-13 14:02:31.587315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.007 [2024-10-13 14:02:31.587472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.007 [2024-10-13 14:02:31.587474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1471069 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1471069 /var/tmp/spdk2.sock 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1471069 /var/tmp/spdk2.sock 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1471069 /var/tmp/spdk2.sock 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1471069 ']' 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.577 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.577 [2024-10-13 14:02:32.230948] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:28.577 [2024-10-13 14:02:32.231005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471069 ] 00:06:28.837 [2024-10-13 14:02:32.363806] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:28.837 [2024-10-13 14:02:32.426612] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1470762 has claimed it. 00:06:28.837 [2024-10-13 14:02:32.426644] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1471069) - No such process 00:06:29.167 ERROR: process (pid: 1471069) is no longer running 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1470762 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1470762 ']' 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1470762 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.167 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470762 00:06:29.471 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.471 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.471 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470762' 00:06:29.471 killing process with pid 1470762 00:06:29.471 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1470762 00:06:29.471 14:02:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1470762 00:06:29.471 00:06:29.471 real 0m1.765s 00:06:29.471 user 0m4.832s 00:06:29.471 sys 0m0.406s 00:06:29.471 14:02:33 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.471 14:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.471 ************************************ 00:06:29.471 END TEST locking_overlapped_coremask 00:06:29.471 ************************************ 00:06:29.471 14:02:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.472 14:02:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.472 14:02:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.472 14:02:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.472 ************************************ 00:06:29.472 START TEST locking_overlapped_coremask_via_rpc 00:06:29.472 ************************************ 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1471135 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1471135 /var/tmp/spdk.sock 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1471135 ']' 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.472 14:02:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.733 [2024-10-13 14:02:33.224377] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:29.733 [2024-10-13 14:02:33.224425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471135 ] 00:06:29.733 [2024-10-13 14:02:33.355284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.733 [2024-10-13 14:02:33.386843] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.733 [2024-10-13 14:02:33.386863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.733 [2024-10-13 14:02:33.404838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.733 [2024-10-13 14:02:33.404986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.733 [2024-10-13 14:02:33.404988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1471471 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1471471 /var/tmp/spdk2.sock 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1471471 ']' 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.673 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.673 [2024-10-13 14:02:34.078674] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:30.673 [2024-10-13 14:02:34.078729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471471 ] 00:06:30.673 [2024-10-13 14:02:34.212618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.673 [2024-10-13 14:02:34.275387] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.673 [2024-10-13 14:02:34.275408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.674 [2024-10-13 14:02:34.315558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.674 [2024-10-13 14:02:34.315713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.674 [2024-10-13 14:02:34.315715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:31.244 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.245 [2024-10-13 14:02:34.881146] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1471135 has claimed it. 
00:06:31.245 request: 00:06:31.245 { 00:06:31.245 "method": "framework_enable_cpumask_locks", 00:06:31.245 "req_id": 1 00:06:31.245 } 00:06:31.245 Got JSON-RPC error response 00:06:31.245 response: 00:06:31.245 { 00:06:31.245 "code": -32603, 00:06:31.245 "message": "Failed to claim CPU core: 2" 00:06:31.245 } 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1471135 /var/tmp/spdk.sock 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1471135 ']' 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.245 14:02:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1471471 /var/tmp/spdk2.sock 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1471471 ']' 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
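That failed exchange is the point of the test: both targets start with --disable-cpumask-locks, the first rpc_cmd framework_enable_cpumask_locks claims cores 0-2 and creates /var/tmp/spdk_cpu_lock_000 through _002, and the same RPC against the second target's socket must fail with -32603 because core 2 is already locked. A minimal sketch of that flow, assuming the workspace's rpc.py path:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_enable_cpumask_locks                 # first target: claims cores 0-2
    ls /var/tmp/spdk_cpu_lock_*                         # -> ..._000 ..._001 ..._002
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "expected: core 2 already claimed (-32603)"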
00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.506 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.766 00:06:31.766 real 0m2.089s 00:06:31.766 user 0m0.869s 00:06:31.766 sys 0m0.140s 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.766 14:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.766 ************************************ 00:06:31.766 END TEST locking_overlapped_coremask_via_rpc 00:06:31.766 ************************************ 00:06:31.766 14:02:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:31.766 14:02:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1471135 ]] 00:06:31.766 14:02:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1471135 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1471135 ']' 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1471135 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471135 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471135' 00:06:31.766 killing process with pid 1471135 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1471135 00:06:31.766 14:02:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1471135 00:06:32.026 14:02:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1471471 ]] 00:06:32.026 14:02:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1471471 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1471471 ']' 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1471471 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471471 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471471' 00:06:32.026 killing process with pid 1471471 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1471471 00:06:32.026 14:02:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1471471 00:06:32.286 14:02:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.286 14:02:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.286 14:02:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1471135 ]] 00:06:32.286 14:02:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1471135 00:06:32.286 14:02:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1471135 ']' 00:06:32.286 14:02:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1471135 00:06:32.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1471135) - No such process 00:06:32.287 14:02:35 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1471135 is not found' 00:06:32.287 Process with pid 1471135 is not found 00:06:32.287 14:02:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1471471 ]] 00:06:32.287 14:02:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1471471 00:06:32.287 14:02:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1471471 ']' 00:06:32.287 14:02:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1471471 00:06:32.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1471471) - No such process 00:06:32.287 14:02:35 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1471471 is not found' 00:06:32.287 Process with pid 1471471 is not found 00:06:32.287 14:02:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.287 00:06:32.287 real 0m16.411s 00:06:32.287 user 0m27.373s 00:06:32.287 sys 0m5.140s 00:06:32.287 14:02:35 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.287 14:02:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.287 ************************************ 00:06:32.287 END TEST cpu_locks 00:06:32.287 ************************************ 00:06:32.287 00:06:32.287 real 0m43.105s 00:06:32.287 user 1m22.447s 00:06:32.287 sys 0m8.567s 00:06:32.287 14:02:35 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.287 14:02:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.287 ************************************ 00:06:32.287 END TEST event 00:06:32.287 ************************************ 00:06:32.287 14:02:35 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:32.287 14:02:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.287 14:02:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.287 14:02:35 -- common/autotest_common.sh@10 -- # set +x 00:06:32.287 ************************************ 00:06:32.287 START TEST thread 00:06:32.287 ************************************ 00:06:32.287 14:02:35 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:32.547 * Looking for test storage... 00:06:32.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.547 14:02:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.547 14:02:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.547 14:02:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.547 14:02:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.547 14:02:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.547 14:02:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.547 14:02:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.547 14:02:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.547 14:02:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.547 14:02:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.547 14:02:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.547 14:02:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:32.547 14:02:36 thread -- scripts/common.sh@345 -- # : 1 00:06:32.547 14:02:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.547 14:02:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.547 14:02:36 thread -- scripts/common.sh@365 -- # decimal 1 00:06:32.547 14:02:36 thread -- scripts/common.sh@353 -- # local d=1 00:06:32.547 14:02:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.547 14:02:36 thread -- scripts/common.sh@355 -- # echo 1 00:06:32.547 14:02:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.547 14:02:36 thread -- scripts/common.sh@366 -- # decimal 2 00:06:32.547 14:02:36 thread -- scripts/common.sh@353 -- # local d=2 00:06:32.547 14:02:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.547 14:02:36 thread -- scripts/common.sh@355 -- # echo 2 00:06:32.547 14:02:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.547 14:02:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.547 14:02:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.547 14:02:36 thread -- scripts/common.sh@368 -- # return 0 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.547 --rc genhtml_branch_coverage=1 00:06:32.547 --rc genhtml_function_coverage=1 00:06:32.547 --rc genhtml_legend=1 00:06:32.547 --rc geninfo_all_blocks=1 00:06:32.547 --rc geninfo_unexecuted_blocks=1 00:06:32.547 00:06:32.547 ' 00:06:32.547 14:02:36 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.547 --rc genhtml_branch_coverage=1 00:06:32.548 --rc genhtml_function_coverage=1 00:06:32.548 --rc genhtml_legend=1 00:06:32.548 --rc geninfo_all_blocks=1 00:06:32.548 --rc geninfo_unexecuted_blocks=1 00:06:32.548 
00:06:32.548 ' 00:06:32.548 14:02:36 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:32.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.548 --rc genhtml_branch_coverage=1 00:06:32.548 --rc genhtml_function_coverage=1 00:06:32.548 --rc genhtml_legend=1 00:06:32.548 --rc geninfo_all_blocks=1 00:06:32.548 --rc geninfo_unexecuted_blocks=1 00:06:32.548 00:06:32.548 ' 00:06:32.548 14:02:36 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.548 --rc genhtml_branch_coverage=1 00:06:32.548 --rc genhtml_function_coverage=1 00:06:32.548 --rc genhtml_legend=1 00:06:32.548 --rc geninfo_all_blocks=1 00:06:32.548 --rc geninfo_unexecuted_blocks=1 00:06:32.548 00:06:32.548 ' 00:06:32.548 14:02:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.548 14:02:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:32.548 14:02:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.548 14:02:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.548 ************************************ 00:06:32.548 START TEST thread_poller_perf 00:06:32.548 ************************************ 00:06:32.548 14:02:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.548 [2024-10-13 14:02:36.187541] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:32.548 [2024-10-13 14:02:36.187631] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471915 ] 00:06:32.808 [2024-10-13 14:02:36.324535] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.808 [2024-10-13 14:02:36.372599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.809 [2024-10-13 14:02:36.389858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.809 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:33.749 [2024-10-13T12:02:37.456Z] ====================================== 00:06:33.749 [2024-10-13T12:02:37.456Z] busy:2402972972 (cyc) 00:06:33.749 [2024-10-13T12:02:37.456Z] total_run_count: 418000 00:06:33.749 [2024-10-13T12:02:37.456Z] tsc_hz: 2394400000 (cyc) 00:06:33.749 [2024-10-13T12:02:37.456Z] ====================================== 00:06:33.749 [2024-10-13T12:02:37.456Z] poller_cost: 5748 (cyc), 2400 (nsec) 00:06:33.749 00:06:33.749 real 0m1.250s 00:06:33.749 user 0m1.064s 00:06:33.749 sys 0m0.081s 00:06:33.749 14:02:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.749 14:02:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.749 ************************************ 00:06:33.749 END TEST thread_poller_perf 00:06:33.749 ************************************ 00:06:33.749 14:02:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:33.749 14:02:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:33.749 14:02:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.749 14:02:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.009 ************************************ 00:06:34.009 START TEST thread_poller_perf 00:06:34.009 ************************************ 00:06:34.009 14:02:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.009 [2024-10-13 14:02:37.512735] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:34.009 [2024-10-13 14:02:37.512834] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472272 ] 00:06:34.009 [2024-10-13 14:02:37.647499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:34.009 [2024-10-13 14:02:37.692917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.009 [2024-10-13 14:02:37.709574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.009 Running 1000 pollers for 1 seconds with 0 microseconds period. 
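The poller_cost line is plain division of the counters above: busy cycles over total_run_count, converted to nanoseconds through tsc_hz; the flags -b 1000 -l 1 -t 1 mean 1000 pollers with a 1 microsecond period over a 1 second run. The integer arithmetic reproduces the reported values exactly:

    echo $(( 2402972972 / 418000 ))             # -> 5748 cyc per poller invocation
    echo $(( 5748 * 1000000000 / 2394400000 ))  # -> 2400 nsec at tsc_hz 2394400000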
00:06:35.391 [2024-10-13T12:02:39.098Z] ====================================== 00:06:35.391 [2024-10-13T12:02:39.098Z] busy:2395837286 (cyc) 00:06:35.391 [2024-10-13T12:02:39.098Z] total_run_count: 5541000 00:06:35.391 [2024-10-13T12:02:39.098Z] tsc_hz: 2394400000 (cyc) 00:06:35.391 [2024-10-13T12:02:39.098Z] ====================================== 00:06:35.391 [2024-10-13T12:02:39.098Z] poller_cost: 432 (cyc), 180 (nsec) 00:06:35.391 00:06:35.391 real 0m1.237s 00:06:35.391 user 0m1.056s 00:06:35.391 sys 0m0.077s 00:06:35.391 14:02:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.391 14:02:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.391 ************************************ 00:06:35.391 END TEST thread_poller_perf 00:06:35.391 ************************************ 00:06:35.391 14:02:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:35.391 00:06:35.391 real 0m2.835s 00:06:35.391 user 0m2.293s 00:06:35.391 sys 0m0.356s 00:06:35.391 14:02:38 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.391 14:02:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.391 ************************************ 00:06:35.391 END TEST thread 00:06:35.391 ************************************ 00:06:35.391 14:02:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:35.391 14:02:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:35.391 14:02:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.391 14:02:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.391 14:02:38 -- common/autotest_common.sh@10 -- # set +x 00:06:35.391 ************************************ 00:06:35.391 START TEST app_cmdline 00:06:35.391 ************************************ 00:06:35.391 14:02:38 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:35.391 * Looking for test storage... 
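Same division for the zero-period run: 2395837286 / 5541000 gives 432 cycles, or 180 nsec. With -l 0 each poller runs on every reactor iteration instead of firing once per 1 microsecond period, so total_run_count climbs from 418000 to 5541000 and per-invocation cost drops by roughly 13x, consistent with timed pollers carrying extra timer bookkeeping per call:

    echo $(( 2395837286 / 5541000 ))            # -> 432 cyc per poller invocation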
00:06:35.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:35.391 14:02:38 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.391 14:02:38 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.391 14:02:38 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.391 14:02:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.391 --rc genhtml_branch_coverage=1 00:06:35.391 --rc genhtml_function_coverage=1 00:06:35.391 --rc genhtml_legend=1 00:06:35.391 --rc geninfo_all_blocks=1 00:06:35.391 --rc geninfo_unexecuted_blocks=1 00:06:35.391 00:06:35.391 ' 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.391 --rc genhtml_branch_coverage=1 00:06:35.391 --rc genhtml_function_coverage=1 00:06:35.391 --rc genhtml_legend=1 00:06:35.391 --rc geninfo_all_blocks=1 00:06:35.391 --rc geninfo_unexecuted_blocks=1 
00:06:35.391 00:06:35.391 ' 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.391 --rc genhtml_branch_coverage=1 00:06:35.391 --rc genhtml_function_coverage=1 00:06:35.391 --rc genhtml_legend=1 00:06:35.391 --rc geninfo_all_blocks=1 00:06:35.391 --rc geninfo_unexecuted_blocks=1 00:06:35.391 00:06:35.391 ' 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.391 --rc genhtml_branch_coverage=1 00:06:35.391 --rc genhtml_function_coverage=1 00:06:35.391 --rc genhtml_legend=1 00:06:35.391 --rc geninfo_all_blocks=1 00:06:35.391 --rc geninfo_unexecuted_blocks=1 00:06:35.391 00:06:35.391 ' 00:06:35.391 14:02:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:35.391 14:02:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1472674 00:06:35.391 14:02:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1472674 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1472674 ']' 00:06:35.391 14:02:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.391 14:02:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.391 [2024-10-13 14:02:39.095754] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:35.391 [2024-10-13 14:02:39.095811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472674 ] 00:06:35.652 [2024-10-13 14:02:39.226747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:35.652 [2024-10-13 14:02:39.273164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.652 [2024-10-13 14:02:39.290671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.222 14:02:39 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.222 14:02:39 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:36.222 14:02:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:36.522 { 00:06:36.522 "version": "SPDK v25.01-pre git sha1 bbce7a874", 00:06:36.522 "fields": { 00:06:36.522 "major": 25, 00:06:36.522 "minor": 1, 00:06:36.522 "patch": 0, 00:06:36.522 "suffix": "-pre", 00:06:36.522 "commit": "bbce7a874" 00:06:36.522 } 00:06:36.522 } 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:36.522 14:02:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:36.522 14:02:40 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.783 request: 00:06:36.783 { 00:06:36.783 "method": 
"env_dpdk_get_mem_stats", 00:06:36.783 "req_id": 1 00:06:36.783 } 00:06:36.783 Got JSON-RPC error response 00:06:36.783 response: 00:06:36.783 { 00:06:36.783 "code": -32601, 00:06:36.783 "message": "Method not found" 00:06:36.783 } 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.783 14:02:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1472674 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1472674 ']' 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1472674 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472674 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472674' 00:06:36.783 killing process with pid 1472674 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@969 -- # kill 1472674 00:06:36.783 14:02:40 app_cmdline -- common/autotest_common.sh@974 -- # wait 1472674 00:06:37.044 00:06:37.045 real 0m1.707s 00:06:37.045 user 0m1.960s 00:06:37.045 sys 0m0.455s 00:06:37.045 14:02:40 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.045 14:02:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.045 ************************************ 00:06:37.045 END TEST app_cmdline 00:06:37.045 ************************************ 00:06:37.045 14:02:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:37.045 14:02:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.045 14:02:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.045 14:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.045 ************************************ 00:06:37.045 START TEST version 00:06:37.045 ************************************ 00:06:37.045 14:02:40 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:37.045 * Looking for test storage... 
00:06:37.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:37.045 14:02:40 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.045 14:02:40 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.045 14:02:40 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.306 14:02:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.306 14:02:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.306 14:02:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.306 14:02:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.306 14:02:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.306 14:02:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.306 14:02:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.306 14:02:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.306 14:02:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.306 14:02:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.306 14:02:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.306 14:02:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:37.306 14:02:40 version -- scripts/common.sh@345 -- # : 1 00:06:37.306 14:02:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.306 14:02:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.306 14:02:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:37.306 14:02:40 version -- scripts/common.sh@353 -- # local d=1 00:06:37.306 14:02:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.306 14:02:40 version -- scripts/common.sh@355 -- # echo 1 00:06:37.306 14:02:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.306 14:02:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:37.306 14:02:40 version -- scripts/common.sh@353 -- # local d=2 00:06:37.306 14:02:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.306 14:02:40 version -- scripts/common.sh@355 -- # echo 2 00:06:37.306 14:02:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.306 14:02:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.306 14:02:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.306 14:02:40 version -- scripts/common.sh@368 -- # return 0 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.306 --rc genhtml_branch_coverage=1 00:06:37.306 --rc genhtml_function_coverage=1 00:06:37.306 --rc genhtml_legend=1 00:06:37.306 --rc geninfo_all_blocks=1 00:06:37.306 --rc geninfo_unexecuted_blocks=1 00:06:37.306 00:06:37.306 ' 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.306 --rc genhtml_branch_coverage=1 00:06:37.306 --rc genhtml_function_coverage=1 00:06:37.306 --rc genhtml_legend=1 00:06:37.306 --rc geninfo_all_blocks=1 00:06:37.306 --rc geninfo_unexecuted_blocks=1 00:06:37.306 00:06:37.306 ' 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.306 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.306 --rc genhtml_branch_coverage=1 00:06:37.306 --rc genhtml_function_coverage=1 00:06:37.306 --rc genhtml_legend=1 00:06:37.306 --rc geninfo_all_blocks=1 00:06:37.306 --rc geninfo_unexecuted_blocks=1 00:06:37.306 00:06:37.306 ' 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.306 --rc genhtml_branch_coverage=1 00:06:37.306 --rc genhtml_function_coverage=1 00:06:37.306 --rc genhtml_legend=1 00:06:37.306 --rc geninfo_all_blocks=1 00:06:37.306 --rc geninfo_unexecuted_blocks=1 00:06:37.306 00:06:37.306 ' 00:06:37.306 14:02:40 version -- app/version.sh@17 -- # get_header_version major 00:06:37.306 14:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # cut -f2 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.306 14:02:40 version -- app/version.sh@17 -- # major=25 00:06:37.306 14:02:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:37.306 14:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # cut -f2 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.306 14:02:40 version -- app/version.sh@18 -- # minor=1 00:06:37.306 14:02:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:37.306 14:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # cut -f2 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.306 14:02:40 version -- app/version.sh@19 -- # patch=0 00:06:37.306 14:02:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:37.306 14:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # cut -f2 00:06:37.306 14:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.306 14:02:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:37.306 14:02:40 version -- app/version.sh@22 -- # version=25.1 00:06:37.306 14:02:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:37.306 14:02:40 version -- app/version.sh@28 -- # version=25.1rc0 00:06:37.306 14:02:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:37.306 14:02:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:37.306 14:02:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:37.306 14:02:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:37.306 00:06:37.306 real 0m0.281s 00:06:37.306 user 0m0.167s 00:06:37.306 sys 0m0.162s 00:06:37.306 14:02:40 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.306 
14:02:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:37.306 ************************************ 00:06:37.306 END TEST version 00:06:37.306 ************************************ 00:06:37.306 14:02:40 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:37.306 14:02:40 -- spdk/autotest.sh@194 -- # uname -s 00:06:37.306 14:02:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:37.306 14:02:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:37.306 14:02:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:37.306 14:02:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:37.306 14:02:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.306 14:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.306 14:02:40 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:37.306 14:02:40 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:37.306 14:02:40 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.306 14:02:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.307 14:02:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.307 14:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.567 ************************************ 00:06:37.567 START TEST nvmf_tcp 00:06:37.567 ************************************ 00:06:37.567 14:02:41 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.567 * Looking for test storage... 
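version.sh builds the string 25.1rc0 from four fields in include/spdk/version.h (major 25, minor 1, patch 0, suffix -pre, with the -pre suffix rendered as rc0) and then checks it against python3 -c 'import spdk; print(spdk.__version__)'. The extraction pipeline it runs per field, shown for the major version:

    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"'   # -> 25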
00:06:37.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:37.567 14:02:41 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.567 14:02:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.568 14:02:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.568 --rc genhtml_branch_coverage=1 00:06:37.568 --rc genhtml_function_coverage=1 00:06:37.568 --rc genhtml_legend=1 00:06:37.568 --rc geninfo_all_blocks=1 00:06:37.568 --rc geninfo_unexecuted_blocks=1 00:06:37.568 00:06:37.568 ' 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.568 --rc genhtml_branch_coverage=1 00:06:37.568 --rc genhtml_function_coverage=1 00:06:37.568 --rc genhtml_legend=1 00:06:37.568 --rc geninfo_all_blocks=1 00:06:37.568 --rc geninfo_unexecuted_blocks=1 00:06:37.568 00:06:37.568 ' 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.568 --rc genhtml_branch_coverage=1 00:06:37.568 --rc genhtml_function_coverage=1 00:06:37.568 --rc genhtml_legend=1 00:06:37.568 --rc geninfo_all_blocks=1 00:06:37.568 --rc geninfo_unexecuted_blocks=1 00:06:37.568 00:06:37.568 ' 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.568 --rc genhtml_branch_coverage=1 00:06:37.568 --rc genhtml_function_coverage=1 00:06:37.568 --rc genhtml_legend=1 00:06:37.568 --rc geninfo_all_blocks=1 00:06:37.568 --rc geninfo_unexecuted_blocks=1 00:06:37.568 00:06:37.568 ' 00:06:37.568 14:02:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:37.568 14:02:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:37.568 14:02:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.568 14:02:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.568 ************************************ 00:06:37.568 START TEST nvmf_target_core 00:06:37.568 ************************************ 00:06:37.568 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:37.830 * Looking for test storage... 00:06:37.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.830 --rc genhtml_branch_coverage=1 00:06:37.830 --rc genhtml_function_coverage=1 00:06:37.830 --rc genhtml_legend=1 00:06:37.830 --rc geninfo_all_blocks=1 00:06:37.830 --rc geninfo_unexecuted_blocks=1 00:06:37.830 00:06:37.830 ' 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.830 --rc genhtml_branch_coverage=1 00:06:37.830 --rc genhtml_function_coverage=1 00:06:37.830 --rc genhtml_legend=1 00:06:37.830 --rc geninfo_all_blocks=1 00:06:37.830 --rc geninfo_unexecuted_blocks=1 00:06:37.830 00:06:37.830 ' 00:06:37.830 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.830 --rc genhtml_branch_coverage=1 00:06:37.830 --rc genhtml_function_coverage=1 00:06:37.830 --rc genhtml_legend=1 00:06:37.831 --rc geninfo_all_blocks=1 00:06:37.831 --rc geninfo_unexecuted_blocks=1 00:06:37.831 00:06:37.831 ' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.831 --rc genhtml_branch_coverage=1 00:06:37.831 --rc genhtml_function_coverage=1 00:06:37.831 --rc genhtml_legend=1 00:06:37.831 --rc geninfo_all_blocks=1 00:06:37.831 --rc geninfo_unexecuted_blocks=1 00:06:37.831 00:06:37.831 ' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.831 14:02:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.093 
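The "lt 1.15 2" / cmp_versions walk traced near the top of this run is the harness checking whether the installed lcov (1.15) predates 2.x before choosing coverage flags. A minimal sketch of that helper, re-derived from the xtrace alone rather than copied from scripts/common.sh (the traced version also validates each field through decimal()'s ^[0-9]+$ check, omitted here):

  # Split versions on ".-:" and compare field by field as integers,
  # padding missing fields with 0, as the xtrace above shows.
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]   # all fields equal: only an equality op succeeds
  }
  cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # returns 0, as in the trace

Because 1 < 2 decides the comparison at the first field, the traced call returns 0 at scripts/common.sh@368 and the lcov 1.x option set is exported, which is exactly the LCOV_OPTS/LCOV block that follows in the trace.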
************************************ 00:06:38.093 START TEST nvmf_abort 00:06:38.093 ************************************ 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:38.093 * Looking for test storage... 00:06:38.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.093 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.094 --rc genhtml_branch_coverage=1 00:06:38.094 --rc genhtml_function_coverage=1 00:06:38.094 --rc genhtml_legend=1 00:06:38.094 --rc geninfo_all_blocks=1 00:06:38.094 --rc geninfo_unexecuted_blocks=1 00:06:38.094 00:06:38.094 ' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
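Before the nvmftestinit trace below, a note on the "[: : integer expression expected" message that nvmf/common.sh line 33 has now printed twice: the xtrace shows '[' '' -eq 1 ']', a numeric test against a variable that is empty in this environment. The test exits with status 2, which behaves as false, so the run continues unaffected. A minimal reproduction, with a hypothetical variable name standing in for whatever common.sh tests on that line:

  SPDK_TEST_SETUP=""                    # hypothetical name; empty, as in this run
  [ "$SPDK_TEST_SETUP" -eq 1 ]          # '[' '' -eq 1 ']' -> "integer expression expected", status 2
  [ "${SPDK_TEST_SETUP:-0}" -eq 1 ]     # defensive form: same (false) result, no noise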
00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.094 14:02:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.250 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.250 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.250 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.251 14:02:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:46.251 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:46.251 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.251 14:02:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:46.251 Found net devices under 0000:31:00.0: cvl_0_0 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:46.251 Found net devices under 0000:31:00.1: cvl_0_1 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.251 14:02:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:06:46.251 00:06:46.251 --- 10.0.0.2 ping statistics --- 00:06:46.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.251 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:06:46.251 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:46.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:06:46.252 00:06:46.252 --- 10.0.0.1 ping statistics --- 00:06:46.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.252 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1477220 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1477220 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1477220 ']' 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.252 14:02:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.252 [2024-10-13 14:02:49.559906] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:06:46.252 [2024-10-13 14:02:49.559987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.252 [2024-10-13 14:02:49.704029] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:46.252 [2024-10-13 14:02:49.753260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.252 [2024-10-13 14:02:49.782507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.252 [2024-10-13 14:02:49.782549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.252 [2024-10-13 14:02:49.782558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.252 [2024-10-13 14:02:49.782565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.252 [2024-10-13 14:02:49.782571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.252 [2024-10-13 14:02:49.784369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.252 [2024-10-13 14:02:49.784531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.252 [2024-10-13 14:02:49.784532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.824 [2024-10-13 14:02:50.424780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.824 Malloc0 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.824 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.825 Delay0 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.825 [2024-10-13 14:02:50.511465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.825 14:02:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:47.085 [2024-10-13 14:02:50.740761] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:49.632 [2024-10-13 14:02:52.892912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2572350 is same with the state(6) to be set 00:06:49.632 Initializing NVMe Controllers 00:06:49.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:49.632 controller IO queue size 128 less than required 00:06:49.632 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:49.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:49.632 Initialization complete. Launching workers. 
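The provisioning that rpc_cmd issued step by step above collapses into the rpc.py sequence below; this is a sketch assuming the nvmf_tgt launched earlier is still listening on its default RPC socket. Delay0 wraps Malloc0 with artificial latency (the -r/-t/-w/-n values are average and p99 read/write latencies in microseconds), so the abort example has long-lived in-flight I/O to cancel; the abort statistics that follow in the trace report how many of those commands were successfully aborted:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Drive 128-deep traffic at the delayed namespace and abort it, as traced:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128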
00:06:49.632 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28655 00:06:49.632 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28720, failed to submit 62 00:06:49.632 success 28659, unsuccessful 61, failed 0 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:49.632 rmmod nvme_tcp 00:06:49.632 rmmod nvme_fabrics 00:06:49.632 rmmod nvme_keyring 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1477220 ']' 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1477220 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1477220 ']' 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1477220 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.632 14:02:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477220 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477220' 00:06:49.632 killing process with pid 1477220 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1477220 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1477220 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:49.632 14:02:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:49.632 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.633 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.633 14:02:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.550 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.550 00:06:51.550 real 0m13.703s 00:06:51.550 user 0m14.220s 00:06:51.550 sys 0m6.730s 00:06:51.550 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.550 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:51.550 ************************************ 00:06:51.550 END TEST nvmf_abort 00:06:51.550 ************************************ 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.813 ************************************ 00:06:51.813 START TEST nvmf_ns_hotplug_stress 00:06:51.813 ************************************ 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:51.813 * Looking for test storage... 
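One detail of the nvmf_abort teardown above worth keeping in mind while reading the ns_hotplug_stress trace that follows: firewall rules are tagged rather than tracked. The ipts helper inserts every rule with an "SPDK_NVMF:" comment, and iptr restores the table minus anything carrying that tag, so cleanup removes exactly the rules a test added however many there were. In sketch form, as traced:

  # Setup (ipts): tag the rule with its own spec as a comment.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Teardown (iptr): rewrite the ruleset without the tagged lines.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1          # then clear the initiator-side test address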
00:06:51.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.813 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.075 --rc genhtml_branch_coverage=1 00:06:52.075 --rc genhtml_function_coverage=1 00:06:52.075 --rc genhtml_legend=1 00:06:52.075 --rc geninfo_all_blocks=1 00:06:52.075 --rc geninfo_unexecuted_blocks=1 00:06:52.075 00:06:52.075 ' 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.075 --rc genhtml_branch_coverage=1 00:06:52.075 --rc genhtml_function_coverage=1 00:06:52.075 --rc genhtml_legend=1 00:06:52.075 --rc geninfo_all_blocks=1 00:06:52.075 --rc geninfo_unexecuted_blocks=1 00:06:52.075 00:06:52.075 ' 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.075 --rc genhtml_branch_coverage=1 00:06:52.075 --rc genhtml_function_coverage=1 00:06:52.075 --rc genhtml_legend=1 00:06:52.075 --rc geninfo_all_blocks=1 00:06:52.075 --rc geninfo_unexecuted_blocks=1 00:06:52.075 00:06:52.075 ' 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.075 --rc genhtml_branch_coverage=1 00:06:52.075 --rc genhtml_function_coverage=1 00:06:52.075 --rc genhtml_legend=1 00:06:52.075 --rc geninfo_all_blocks=1 00:06:52.075 --rc geninfo_unexecuted_blocks=1 00:06:52.075 00:06:52.075 ' 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.075 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.076 14:02:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:00.220 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:00.221 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.221 
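Two things are worth flagging in the common.sh trace above. First, the logged bash error "line 33: [: : integer expression expected" comes from '[' '' -eq 1 ']': the flag variable tested at nvmf/common.sh line 33 expanded to the empty string, and [ ... -eq ... ] requires integer operands. The usual fix (the variable's name is not visible in the trace, so it is shown generically) is to expand with a numeric default:

[ "${SOME_FLAG:-0}" -eq 1 ]    # hypothetical name; empty/unset now compares as 0

Second, gather_supported_nvmf_pci_devs classifies NICs into the e810/x722/mlx arrays purely by PCI vendor:device ID (Intel 0x8086, Mellanox 0x15b3) before matching the two 0x159b ports (E810, ice driver) it finds. A minimal stand-alone sketch of the same idea, using stock pciutils instead of the harness's pci_bus_cache (the IDs are copied from the probes above; this is an illustration, not the harness's code):

#!/usr/bin/env bash
# Sketch: collect NIC PCI addresses by vendor:device ID, then map each address
# to its kernel net device via sysfs, mirroring what nvmf/common.sh does above.
ids=(8086:1592 8086:159b 8086:37d2 15b3:1017 15b3:101b)  # subset of the probed IDs
pci_devs=()
for id in "${ids[@]}"; do
    while read -r addr _; do
        pci_devs+=("$addr")
    done < <(lspci -Dn -d "$id")       # -D: full domain address, -n: numeric IDs
done
for pci in "${pci_devs[@]}"; do
    echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
done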
14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:00.221 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:00.221 Found net devices under 0000:31:00.0: cvl_0_0 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:00.221 Found net devices under 0000:31:00.1: cvl_0_1 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.221 14:03:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:00.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:07:00.221 00:07:00.221 --- 10.0.0.2 ping statistics --- 00:07:00.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.221 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:07:00.221 00:07:00.221 --- 10.0.0.1 ping statistics --- 00:07:00.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.221 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.221 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1482386 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1482386 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1482386 ']' 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.222 14:03:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.222 [2024-10-13 14:03:03.417386] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:07:00.222 [2024-10-13 14:03:03.417450] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.222 [2024-10-13 14:03:03.559409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.222 [2024-10-13 14:03:03.607275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.222 [2024-10-13 14:03:03.634292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.222 [2024-10-13 14:03:03.634332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.222 [2024-10-13 14:03:03.634341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.222 [2024-10-13 14:03:03.634353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.222 [2024-10-13 14:03:03.634359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
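The nvmf_tcp_init block above is what makes single-host NVMe/TCP testing work: one port of the two-port E810 (cvl_0_0) is moved into a private network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and both directions are smoke-tested with ping. Collected from the trace into one place:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                   # root ns -> target: 0.585 ms
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator: 0.279 ms

This is also why nvmf_tgt itself is launched as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE" (the nvmfpid line above), yet rpc.py keeps driving it from the root namespace: the control channel is the filesystem UNIX socket /var/tmp/spdk.sock, which network namespaces do not isolate.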
00:07:00.222 [2024-10-13 14:03:03.636136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.222 [2024-10-13 14:03:03.636295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.222 [2024-10-13 14:03:03.636296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:00.794 [2024-10-13 14:03:04.460941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.794 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:01.055 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.315 [2024-10-13 14:03:04.859029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.315 14:03:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.576 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:01.576 Malloc0 00:07:01.837 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:01.837 Delay0 00:07:01.837 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.098 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:02.358 NULL1 00:07:02.358 14:03:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:02.620 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:02.620 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1482815 00:07:02.620 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:02.620 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.620 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.880 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:02.880 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:03.141 true 00:07:03.141 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:03.141 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.401 14:03:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.401 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:03.401 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:03.661 true 00:07:03.661 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:03.661 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.921 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.921 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:03.921 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:04.181 true 00:07:04.181 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:04.181 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
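With networking up, the test assembles the target over JSON-RPC and starts the I/O load it will hot-plug against. The sequence, pulled together from the interleaved trace above (rpc_py is scripts/rpc.py; the backgrounding of perf is implied by the PERF_PID capture and the kill -0 checks):

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_malloc_create 32 512 -b Malloc0        # 32 MB RAM disk, 512 B blocks
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512 B blocks
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &       # 30 s of QD-128 512 B random reads
PERF_PID=$!

Delay0 exists precisely to keep I/O in flight: the four 1000000 arguments give the delay bdev very large artificial per-op latencies, widening the window in which a namespace hot-remove can race against outstanding commands.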
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.441 14:03:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.441 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:04.441 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:04.701 true 00:07:04.701 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:04.701 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.961 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.221 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:05.221 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:05.221 true 00:07:05.221 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:05.221 14:03:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.481 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.741 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:05.741 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:05.741 true 00:07:05.741 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:05.741 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.002 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.262 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:06.262 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:06.262 true 00:07:06.262 14:03:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:06.262 14:03:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.522 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.783 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:06.783 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:06.783 true 00:07:07.043 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:07.043 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.043 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.303 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:07.303 14:03:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:07.563 true 00:07:07.563 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:07.563 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.563 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.823 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:07.823 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:08.083 true 00:07:08.083 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:08.083 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.083 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.343 14:03:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:08.343 14:03:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:08.603 true 00:07:08.603 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:08.603 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.889 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.889 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:08.889 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:09.189 true 00:07:09.189 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:09.189 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.189 14:03:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.468 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:09.468 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:09.729 true 00:07:09.729 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:09.729 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.729 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.990 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:09.990 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:10.250 true 00:07:10.251 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:10.251 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.251 14:03:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.511 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:10.511 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:10.771 true 00:07:10.771 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:10.771 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.032 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.032 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:11.032 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:11.292 true 00:07:11.292 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:11.292 14:03:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.553 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.553 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:11.553 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:11.814 true 00:07:11.814 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:11.814 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.077 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.077 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:12.077 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:12.338 true 00:07:12.338 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:12.338 14:03:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.598 14:03:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.598 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:12.598 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:12.858 true 00:07:12.858 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:12.858 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.120 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.381 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:13.381 14:03:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:13.381 true 00:07:13.381 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:13.381 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.641 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.902 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:13.902 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:13.902 true 00:07:13.902 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:13.902 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.163 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.423 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:14.423 14:03:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:14.423 true 00:07:14.423 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:14.423 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.684 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.945 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:14.945 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:14.945 true 00:07:15.205 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:15.205 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.205 14:03:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.466 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:15.466 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:15.726 true 00:07:15.726 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:15.726 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.726 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.987 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:15.987 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:16.248 true 00:07:16.248 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:16.248 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.248 14:03:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.508 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:16.508 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:16.768 true 00:07:16.768 14:03:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:16.768 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.029 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.029 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:17.029 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:17.289 true 00:07:17.290 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:17.290 14:03:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.550 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.550 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:17.550 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:17.810 true 00:07:17.810 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:17.810 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.069 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.069 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:18.069 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:18.383 true 00:07:18.383 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:18.383 14:03:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.642 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.642 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:18.642 14:03:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:18.901 true 00:07:18.901 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:18.901 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.161 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.161 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:19.161 14:03:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:19.421 true 00:07:19.422 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:19.422 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.681 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.943 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:19.943 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:19.943 true 00:07:19.943 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:19.943 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.203 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.463 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:20.463 14:03:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:20.463 true 00:07:20.463 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:20.463 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.723 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.984 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:20.984 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:20.984 true 00:07:20.984 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:20.984 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.246 14:03:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.505 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:21.505 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:21.505 true 00:07:21.765 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:21.765 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.765 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.026 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:22.026 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:22.286 true 00:07:22.286 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:22.286 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.286 14:03:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.547 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:22.547 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:22.807 true 00:07:22.807 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:22.807 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.807 14:03:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.067 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:23.067 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:23.328 true 00:07:23.328 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:23.328 14:03:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.589 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.589 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:23.589 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:23.849 true 00:07:23.849 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:23.849 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.109 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.109 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:24.109 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:24.369 true 00:07:24.369 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:24.369 14:03:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.629 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.629 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:24.629 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:24.890 true 00:07:24.890 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:24.890 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.150 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.150 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:25.150 14:03:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:25.410 true 00:07:25.410 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:25.410 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.670 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.930 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:25.930 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:25.930 true 00:07:25.930 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:25.930 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.190 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.450 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:26.450 14:03:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:26.450 true 00:07:26.450 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:26.450 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.711 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.971 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:26.971 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:26.971 true 00:07:26.971 14:03:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:26.971 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.232 14:03:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.492 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:27.492 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:27.752 true 00:07:27.752 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:27.752 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.752 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.012 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:28.012 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:28.273 true 00:07:28.273 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:28.273 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.273 14:03:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.534 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:28.534 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:28.795 true 00:07:28.795 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:28.795 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.056 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.056 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:29.056 14:03:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:29.316 true 00:07:29.316 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:29.316 14:03:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.576 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.576 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:29.576 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:29.837 true 00:07:29.837 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:29.837 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.097 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.097 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:30.097 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:30.357 true 00:07:30.357 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:30.357 14:03:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.617 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.878 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:30.878 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:30.878 true 00:07:30.878 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:30.878 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.138 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.399 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:31.399 14:03:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:31.399 true 00:07:31.399 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:31.399 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.660 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.920 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:31.920 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:31.920 true 00:07:31.920 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:31.920 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.181 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.441 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:32.441 14:03:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:32.441 true 00:07:32.701 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815 00:07:32.701 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.701 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.701 Initializing NVMe Controllers 00:07:32.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:32.701 Controller IO queue size 128, less than required. 00:07:32.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:32.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:32.701 Initialization complete. Launching workers. 
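The resize iterations above (null_size 1030 through 1055) are the body of the single-namespace stress loop: while the I/O generator started earlier (PID 1482815) is still alive, the script hot-removes namespace 1 from cnode1, re-attaches the Delay0 bdev, and grows the NULL1 bdev by one block, all under active I/O. A minimal bash sketch of what the traced script lines 44-50 appear to be doing, reconstructed from the xtrace; the $rpc and $perf_pid names are assumptions, everything else is visible in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1030
    while kill -0 "$perf_pid"; do                                        # line 44: loop while the perf process lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-attach Delay0 as a new namespace
        ((++null_size))                                                  # line 49
        "$rpc" bdev_null_resize NULL1 "$null_size"                       # line 50: prints "true" on success, as seen throughout
    done

The run's performance summary and the final failed kill -0 check follow below.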
00:07:32.701 ========================================================
00:07:32.701                                                                            Latency(us)
00:07:32.701 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:07:32.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30556.97      14.92    4188.84    1148.40   11039.95
00:07:32.701 ========================================================
00:07:32.701 Total                                                                  :   30556.97      14.92    4188.84    1148.40   11039.95
00:07:32.962 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:32.962 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:32.962 true
00:07:33.222 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1482815
00:07:33.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1482815) - No such process
00:07:33.222 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1482815
00:07:33.222 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.222 14:03:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:33.482 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:33.482 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:33.482 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:33.482 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:33.482 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:33.742 null0
00:07:33.742 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:33.742 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:33.742 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:33.742 null1
00:07:33.742 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:33.742 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:33.742 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:34.002 null2
00:07:34.002 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:34.002 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:34.002
14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:34.263 null3 00:07:34.263 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.263 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.263 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:34.263 null4 00:07:34.263 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.263 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.263 14:03:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:34.522 null5 00:07:34.522 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.522 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.522 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:34.782 null6 00:07:34.782 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:34.782 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:34.782 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:34.782 null7 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
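By this point the single-namespace phase is finished and eight null bdevs (null0 through null7, each 100 MB with a 4096-byte block size) have been created; the trace entries for script lines 58-64 launch one add_remove worker per namespace in the background. A sketch of that spawn sequence, reconstructed from the xtrace (the loop syntax is an assumption; the traced commands and line numbers are not):

    nthreads=8                                      # line 58
    pids=()
    for ((i = 0; i < nthreads; ++i)); do            # line 59
        "$rpc" bdev_null_create "null$i" 100 4096   # line 60: name, size in MB, block size in bytes
    done
    for ((i = 0; i < nthreads; ++i)); do            # line 62
        add_remove $((i + 1)) "null$i" &            # line 63: worker i drives namespace ID i+1
        pids+=($!)                                  # line 64
    done
    wait "${pids[@]}"                               # line 66

The wait entry a little further down lists exactly these eight background PIDs (1489840 1489842 1489843 1489846 1489848 1489851 1489854 1489855).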
00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
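Each worker runs the add_remove helper traced at ns_hotplug_stress.sh@14-@18: it binds its namespace ID and bdev, then attaches and detaches that namespace ten times in a row. A sketch reconstructed from the interleaved @14/@16/@17/@18 entries here and below ($rpc as before is an assumed name):

    add_remove() {
        local nsid=$1 bdev=$2                                                          # line 14
        for ((i = 0; i < 10; ++i)); do                                                 # line 16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # line 17: attach
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # line 18: detach
        done
    }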
00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1489840 1489842 1489843 1489846 1489848 1489851 1489854 1489855 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.042 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.302 14:03:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.562 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.822 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.082 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.083 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.389 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.691 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
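Because all eight workers trace through the same two script lines, their add/remove entries interleave nondeterministically and the per-pass removal order keeps shifting. To follow a single worker through a saved copy of this console output, filtering on its namespace ID is enough; build.log here is a placeholder file name:

    grep 'nvmf_subsystem_add_ns -n 5' build.log    # every attach issued by the worker driving NSID 5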
00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.692 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.954 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.215 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.475 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.475 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.475 14:03:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.475 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.734 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.735 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.735 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.994 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.994 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.994 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.994 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.994 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.994 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.995 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.256 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.518 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.518 14:03:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.518 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.778 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.778 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i ))
00:07:38.778 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.778 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.778 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.778 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:38.779 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1482386 ']'
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1482386
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1482386 ']'
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1482386
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:38.779 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1482386
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1482386'
killing process with pid 1482386
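For readers following the trace: the ten iterations above all drive the same @16-@18 loop in ns_hotplug_stress.sh, which bumps a counter, re-adds namespaces 1-8 (backed by null bdevs null0-null7) over rpc.py, and then strips them back out by NSID. A minimal sketch of that loop, reconstructed from the xtrace alone (rpc.py path shortened; the strictly sequential add/remove ordering here is an assumption, since the log shows the calls interleaved):

    #!/usr/bin/env bash
    # Sketch of the hotplug stress loop seen in the trace above; not the verbatim script.
    rpc=./scripts/rpc.py                     # rpc.py path, shortened from the log
    nqn=nqn.2016-06.io.spdk:cnode1           # subsystem NQN from the log

    i=0
    while (( i < 10 )); do                   # matches the '(( i < 10 ))' guard at @16
        # Re-add namespaces 1..8, each backed by a null bdev null0..null7 (@17 lines).
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # Remove them again by NSID, as the @18 lines do.
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
    done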
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1482386
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1482386
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:39.039 14:03:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:41.585
00:07:41.585 real 0m49.395s
00:07:41.585 user 3m19.753s
00:07:41.585 sys 0m17.393s
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:41.585 ************************************
00:07:41.585 END TEST nvmf_ns_hotplug_stress
00:07:41.585 ************************************
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:41.585 ************************************
00:07:41.585 START TEST nvmf_delete_subsystem
00:07:41.585 ************************************
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:41.585 * Looking for test storage...
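The teardown just traced follows the usual check, name, kill, reap shape: confirm the pid is non-empty and alive (kill -0), resolve its command name with ps so a bare sudo is never killed by mistake, then kill it and wait so the exit status is collected before the END TEST banner. A hedged reconstruction of that helper from the traced commands only (function and variable names here are illustrative, not the verbatim autotest_common.sh):

    #!/usr/bin/env bash
    # Illustrative reconstruction of the traced kill sequence above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0           # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 in the log
            [ "$process_name" = sudo ] && return 1           # refuse to kill sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap, so the caller sees the exit status
    }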
00:07:41.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:41.585 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.586 --rc genhtml_branch_coverage=1
00:07:41.586 --rc genhtml_function_coverage=1
00:07:41.586 --rc genhtml_legend=1
00:07:41.586 --rc geninfo_all_blocks=1
00:07:41.586 --rc geninfo_unexecuted_blocks=1
00:07:41.586
00:07:41.586 '
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.586 --rc genhtml_branch_coverage=1
00:07:41.586 --rc genhtml_function_coverage=1
00:07:41.586 --rc genhtml_legend=1
00:07:41.586 --rc geninfo_all_blocks=1
00:07:41.586 --rc geninfo_unexecuted_blocks=1
00:07:41.586
00:07:41.586 '
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.586 --rc genhtml_branch_coverage=1
00:07:41.586 --rc genhtml_function_coverage=1
00:07:41.586 --rc genhtml_legend=1
00:07:41.586 --rc geninfo_all_blocks=1
00:07:41.586 --rc geninfo_unexecuted_blocks=1
00:07:41.586
00:07:41.586 '
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:41.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.586 --rc genhtml_branch_coverage=1
00:07:41.586 --rc genhtml_function_coverage=1
00:07:41.586 --rc genhtml_legend=1
00:07:41.586 --rc geninfo_all_blocks=1
00:07:41.586 --rc geninfo_unexecuted_blocks=1
00:07:41.586
00:07:41.586 '
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
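The lt 1.15 2 check traced above splits both version strings on ., - and :, then compares them field by field; the first differing field decides, which is why 1.15 sorts below 2. A standalone sketch of the same comparison (a reconstruction in the spirit of the trace, not the SPDK helper itself; it assumes purely numeric fields):

    #!/usr/bin/env bash
    # Field-wise "less than" for dotted version strings, as in the cmp_versions trace.
    version_lt() {
        local -a v1 v2
        local v max
        IFS=.-: read -ra v1 <<< "$1"      # "1.15" -> (1 15)
        IFS=.-: read -ra v2 <<< "$2"      # "2"    -> (2)
        max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${v1[v]:-0} < ${v2[v]:-0} )) && return 0   # first differing field wins
            (( ${v1[v]:-0} > ${v2[v]:-0} )) && return 1
        done
        return 1                           # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2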
00:07:41.586 14:03:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:07:41.586 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:07:49.725 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:07:49.726 Found 0000:31:00.0 (0x8086 - 0x159b)
14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:07:49.726 Found 0000:31:00.1 (0x8086 - 0x159b)
14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:07:49.726 Found net devices under 0000:31:00.0: cvl_0_0
14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:07:49.726 Found net devices under 0000:31:00.1: cvl_0_1
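gather_supported_nvmf_pci_devs, traced above, works purely from sysfs: it collects the PCI functions whose vendor/device IDs appear in the supported NIC tables (here the two Intel 0x8086:0x159b E810 ports at 0000:31:00.0 and 0000:31:00.1) and then globs /sys/bus/pci/devices/$pci/net/ to map each function to its kernel net device (cvl_0_0, cvl_0_1). The same lookup reduced to a few lines; a sketch, with the vendor and device IDs taken from the log rather than the helper's full tables:

    #!/usr/bin/env bash
    # Resolve PCI functions with a given vendor:device ID to kernel net devices via
    # sysfs, mirroring the 'Found 0000:31:00.x' / 'Found net devices' lines above.
    vendor=0x8086 device=0x159b              # Intel E810 (ice), as matched in the log
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$vendor" ]] || continue
        [[ $(cat "$pci/device") == "$device" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do          # same glob as the @409 trace line
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done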
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:49.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:49.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms
00:07:49.726
00:07:49.726 --- 10.0.0.2 ping statistics ---
00:07:49.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:49.726 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:49.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:49.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:07:49.726
00:07:49.726 --- 10.0.0.1 ping statistics ---
00:07:49.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:49.726 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:49.726 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1495340
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1495340
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1495340 ']'
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:49.727 14:03:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.727 [2024-10-13 14:03:52.786554] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:07:49.727 [2024-10-13 14:03:52.786621] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:49.727 [2024-10-13 14:03:52.928153] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:49.727 [2024-10-13 14:03:52.978244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:49.727 [2024-10-13 14:03:53.005167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:49.727 [2024-10-13 14:03:53.005209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:49.727 [2024-10-13 14:03:53.005217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:49.727 [2024-10-13 14:03:53.005224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:49.727 [2024-10-13 14:03:53.005230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
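Everything from nvmf_tcp_init through nvmfappstart above builds the self-contained TCP test bed this run uses: the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1, the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, an iptables rule admits port 4420, reachability is ping-verified in both directions, and nvmf_tgt is launched inside the namespace while waitforlisten polls /var/tmp/spdk.sock. Condensed into plain commands (values copied from the trace; a sketch, not the verbatim harness code):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
  ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # waitforlisten then retries RPCs against /var/tmp/spdk.sock until the app answers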
00:07:49.727 [2024-10-13 14:03:53.006853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:49.727 [2024-10-13 14:03:53.006857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.988 [2024-10-13 14:03:53.659652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:49.988 [2024-10-13 14:03:53.683864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.988 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:50.250 NULL1
00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000
-w 1000000 -n 1000000 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.250 Delay0 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1495419 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:50.250 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:50.250 [2024-10-13 14:03:53.900672] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:52.164 14:03:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.164 14:03:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.164 14:03:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read 
completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 starting I/O failed: -6 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 [2024-10-13 14:03:56.024196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x679df0 is same with the state(6) to be set 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Read completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.426 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with 
error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 starting I/O failed: -6 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 [2024-10-13 14:03:56.025626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f950c000c00 is same with the state(6) to be set 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Write completed with error (sct=0, sc=8) 00:07:52.427 Read completed with error (sct=0, sc=8) 00:07:53.369 [2024-10-13 14:03:57.000583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67eee0 is same with the state(6) to be set 00:07:53.369 Read completed with error (sct=0, sc=8) 00:07:53.369 Read completed with error (sct=0, sc=8) 00:07:53.369 Read completed with error (sct=0, sc=8) 00:07:53.369 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 
Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 [2024-10-13 14:03:57.025337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x679fd0 is same with the state(6) to be set 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 [2024-10-13 14:03:57.026140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f950c00cfe0 is same with the state(6) to be set 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 [2024-10-13 14:03:57.026230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f950c00d780 is same with the state(6) to be set 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 
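The flood of "Read/Write completed with error (sct=0, sc=8)" records here is the expected outcome, not a failure: delete_subsystem.sh deletes nqn.2016-06.io.spdk:cnode1 two seconds into a five-second spdk_nvme_perf run, so every command still queued behind the one-second Delay0 bdev completes with a generic-status abort (sc=8 most likely being the NVMe "aborted - SQ deletion" code) as the target tears its qpairs down. The race being exercised, reconstructed from the trace (a sketch; rpc.py stands in for the harness's rpc_cmd wrapper):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # 5 s of 128-deep queued I/O
  perf_pid=$!
  sleep 2                                              # let the queues fill behind Delay0
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O
  while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done         # perf exits reporting errors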
00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Write completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 Read completed with error (sct=0, sc=8) 00:07:53.370 [2024-10-13 14:03:57.026539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x67a390 is same with the state(6) to be set 00:07:53.370 Initializing NVMe Controllers 00:07:53.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.370 Controller IO queue size 128, less than required. 00:07:53.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:53.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:53.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:53.370 Initialization complete. Launching workers. 00:07:53.370 ======================================================== 00:07:53.370 Latency(us) 00:07:53.370 Device Information : IOPS MiB/s Average min max 00:07:53.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.04 0.08 895680.91 383.83 1009156.17 00:07:53.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.13 0.07 956377.12 323.36 2001549.90 00:07:53.370 ======================================================== 00:07:53.370 Total : 322.17 0.16 924530.34 323.36 2001549.90 00:07:53.370 00:07:53.370 [2024-10-13 14:03:57.026896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67eee0 (9): Bad file descriptor 00:07:53.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:53.370 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.370 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:53.370 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1495419 00:07:53.370 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1495419 00:07:53.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1495419) - No such process 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1495419 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1495419 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@638 -- # local arg=wait 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1495419 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.941 [2024-10-13 14:03:57.554973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1496213 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:53.941 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:53.942 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:53.942 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:54.202 [2024-10-13 14:03:57.744498] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:54.462 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:54.462 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:54.462 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:55.033 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.033 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:55.033 14:03:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:55.604 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:55.604 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:55.604 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.176 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.176 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:56.176 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.437 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.437 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:56.437 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.008 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.008 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213 00:07:57.008 14:04:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.578 Initializing NVMe Controllers 00:07:57.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.578 Controller IO queue size 128, less than required. 00:07:57.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:57.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:57.578 Initialization complete. Launching workers. 
00:07:57.578 ========================================================
00:07:57.578 Latency(us)
00:07:57.578 Device Information : IOPS MiB/s Average min max
00:07:57.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001568.47 1000015.84 1004469.97
00:07:57.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003706.18 1000146.86 1041845.79
00:07:57.578 ========================================================
00:07:57.578 Total : 256.00 0.12 1002637.33 1000015.84 1041845.79
00:07:57.578
00:07:57.578 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1496213
00:07:57.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1496213) - No such process
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1496213
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:57.579 rmmod nvme_tcp
00:07:57.579 rmmod nvme_fabrics
00:07:57.579 rmmod nvme_keyring
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1495340 ']'
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1495340
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1495340 ']'
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1495340
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495340
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495340'
00:07:57.579 killing process with pid 1495340
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1495340
00:07:57.579 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1495340
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:57.839 14:04:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:59.750
00:07:59.750 real 0m18.617s
00:07:59.750 user 0m30.933s
00:07:59.750 sys 0m6.951s
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:59.750 ************************************
00:07:59.750 END TEST nvmf_delete_subsystem
00:07:59.750 ************************************
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.750 14:04:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:00.011 ************************************
00:08:00.011 START TEST nvmf_host_management
00:08:00.011 ************************************
00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:08:00.011 * Looking for test storage...
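The nvmftestfini/killprocess trace above is the standard teardown between nvmf target tests: flush I/O, unload the initiator-side NVMe modules, stop the target app, strip only the SPDK-tagged firewall rules, and dissolve the namespace so the box is clean for nvmf_host_management. As plain commands, in trace order (a sketch; "ip netns delete" is an assumption for what _remove_spdk_ns amounts to):

  sync
  modprobe -v -r nvme-tcp                # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"     # killprocess: stop nvmf_tgt (pid 1495340 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules commented SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1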
00:08:00.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.011 --rc genhtml_branch_coverage=1 00:08:00.011 --rc genhtml_function_coverage=1 00:08:00.011 --rc genhtml_legend=1 00:08:00.011 --rc geninfo_all_blocks=1 00:08:00.011 --rc geninfo_unexecuted_blocks=1 00:08:00.011 00:08:00.011 ' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.011 --rc genhtml_branch_coverage=1 00:08:00.011 --rc genhtml_function_coverage=1 00:08:00.011 --rc genhtml_legend=1 00:08:00.011 --rc geninfo_all_blocks=1 00:08:00.011 --rc geninfo_unexecuted_blocks=1 00:08:00.011 00:08:00.011 ' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.011 --rc genhtml_branch_coverage=1 00:08:00.011 --rc genhtml_function_coverage=1 00:08:00.011 --rc genhtml_legend=1 00:08:00.011 --rc geninfo_all_blocks=1 00:08:00.011 --rc geninfo_unexecuted_blocks=1 00:08:00.011 00:08:00.011 ' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.011 --rc genhtml_branch_coverage=1 00:08:00.011 --rc genhtml_function_coverage=1 00:08:00.011 --rc genhtml_legend=1 00:08:00.011 --rc geninfo_all_blocks=1 00:08:00.011 --rc geninfo_unexecuted_blocks=1 00:08:00.011 00:08:00.011 ' 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.011 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:00.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.273 14:04:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.411 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:08.411 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:08.412 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:08.412 Found net devices under 0000:31:00.0: cvl_0_0 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.412 14:04:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:08.412 Found net devices under 0000:31:00.1: cvl_0_1 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:08:08.412 00:08:08.412 --- 10.0.0.2 ping statistics --- 00:08:08.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.412 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:08.412 00:08:08.412 --- 10.0.0.1 ping statistics --- 00:08:08.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.412 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1501309 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1501309 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:08.412 14:04:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1501309 ']' 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.412 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.412 [2024-10-13 14:04:11.518266] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:08:08.412 [2024-10-13 14:04:11.518332] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.412 [2024-10-13 14:04:11.660525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.412 [2024-10-13 14:04:11.708238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.412 [2024-10-13 14:04:11.738223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.412 [2024-10-13 14:04:11.738267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.412 [2024-10-13 14:04:11.738282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.412 [2024-10-13 14:04:11.738292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.413 [2024-10-13 14:04:11.738299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
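
[Note] The nvmfappstart sequence above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the target's RPC socket at /var/tmp/spdk.sock is ready. A minimal sketch of that polling pattern, assuming a plain socket-existence probe; the actual autotest helper is more involved (it also verifies the socket answers RPCs), so this is illustrative only:

    # Illustrative sketch -- not the autotest implementation.
    # Poll until $pid is alive and listening on the UNIX-domain RPC socket,
    # as the "Waiting for process to start up..." message in the trace describes.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.5
        done
        return 1
    }
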
00:08:08.413 [2024-10-13 14:04:11.740278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.413 [2024-10-13 14:04:11.740432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.413 [2024-10-13 14:04:11.740590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.413 [2024-10-13 14:04:11.740591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.673 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.673 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:08.673 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:08.673 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.673 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.934 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.934 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.934 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.934 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.935 [2024-10-13 14:04:12.394226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.935 Malloc0 00:08:08.935 [2024-10-13 14:04:12.471879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1501568 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1501568 /var/tmp/bdevperf.sock 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1501568 ']' 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:08.935 { 00:08:08.935 "params": { 00:08:08.935 "name": "Nvme$subsystem", 00:08:08.935 "trtype": "$TEST_TRANSPORT", 00:08:08.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.935 "adrfam": "ipv4", 00:08:08.935 "trsvcid": "$NVMF_PORT", 00:08:08.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.935 "hdgst": ${hdgst:-false}, 00:08:08.935 "ddgst": ${ddgst:-false} 00:08:08.935 }, 00:08:08.935 "method": "bdev_nvme_attach_controller" 00:08:08.935 } 00:08:08.935 EOF 00:08:08.935 )") 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:08.935 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:08.935 "params": { 00:08:08.935 "name": "Nvme0", 00:08:08.935 "trtype": "tcp", 00:08:08.935 "traddr": "10.0.0.2", 00:08:08.935 "adrfam": "ipv4", 00:08:08.935 "trsvcid": "4420", 00:08:08.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:08.935 "hdgst": false, 00:08:08.935 "ddgst": false 00:08:08.935 }, 00:08:08.935 "method": "bdev_nvme_attach_controller" 00:08:08.935 }' 00:08:08.935 [2024-10-13 14:04:12.580308] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
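
[Note] The JSON printed just above is assembled by gen_nvmf_target_json and handed to bdevperf through process substitution (hence the --json /dev/fd/63 argument). A sketch of that pattern, reconstructed from the heredoc, jq, and IFS=, steps visible in the trace; the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only echoes the per-controller fragment:

    # Sketch reconstructed from the trace; the wrapper object is assumed.
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one bdev_nvme_attach_controller fragment per requested subsystem
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        local IFS=,
        # join the fragments and pretty-print; jq also validates the JSON
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
    }

    # usage matching the trace (the function's output becomes /dev/fd/63):
    # bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10
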
00:08:08.935 [2024-10-13 14:04:12.580379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501568 ] 00:08:09.195 [2024-10-13 14:04:12.715326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:09.195 [2024-10-13 14:04:12.765670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.195 [2024-10-13 14:04:12.793765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.456 Running I/O for 10 seconds... 00:08:09.717 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.717 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:09.717 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:09.717 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.717 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:09.979 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=789 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 789 -ge 100 ']' 00:08:09.980 14:04:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.980 [2024-10-13 14:04:13.485373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.485485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8937c0 is same with the state(6) to be set 00:08:09.980 [2024-10-13 14:04:13.486073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.980 [2024-10-13 14:04:13.486134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.486146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.980 [2024-10-13 14:04:13.486162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.486171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.980 [2024-10-13 14:04:13.486179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.486188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:09.980 [2024-10-13 14:04:13.486196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.486204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1384dc0 is same with the state(6) to be set 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.980 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.980 [2024-10-13 14:04:13.495717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.495987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.495994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.980 [2024-10-13 14:04:13.496213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.980 [2024-10-13 14:04:13.496223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.981 [2024-10-13 14:04:13.496878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:09.981 [2024-10-13 14:04:13.496970] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x159e020 was disconnected and freed. reset controller. 
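
[Note] Everything from host_management.sh@84 down to the qpair teardown above is deliberate: the test revokes the host's access to the subsystem while bdevperf is mid-run, so the target drops the TCP queue pairs, every in-flight command completes as ABORTED - SQ DELETION, and the bdev_nvme layer resets the controller. The two RPCs driving it, as they appear in the trace (rpc_cmd is the harness's wrapper around the SPDK RPC client):

    # fault injection from the trace: revoke the host's access mid-I/O ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ... then restore it so the controller reset seen below can reconnect
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
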
00:08:09.981 [2024-10-13 14:04:13.497009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384dc0 (9): Bad file descriptor 00:08:09.981 [2024-10-13 14:04:13.498210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:09.981 task offset: 113792 on job bdev=Nvme0n1 fails 00:08:09.981 00:08:09.981 Latency(us) 00:08:09.981 [2024-10-13T12:04:13.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.981 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:09.981 Job: Nvme0n1 ended in about 0.55 seconds with error 00:08:09.981 Verification LBA range: start 0x0 length 0x400 00:08:09.981 Nvme0n1 : 0.55 1617.19 101.07 116.42 0.00 35974.13 1792.77 34377.39 00:08:09.981 [2024-10-13T12:04:13.688Z] =================================================================================================================== 00:08:09.982 [2024-10-13T12:04:13.689Z] Total : 1617.19 101.07 116.42 0.00 35974.13 1792.77 34377.39 00:08:09.982 [2024-10-13 14:04:13.500406] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.982 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.982 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:09.982 [2024-10-13 14:04:13.594311] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1501568 00:08:10.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1501568) - No such process 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.922 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.923 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.923 { 00:08:10.923 "params": { 00:08:10.923 "name": "Nvme$subsystem", 00:08:10.923 "trtype": "$TEST_TRANSPORT", 00:08:10.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.923 "adrfam": "ipv4", 00:08:10.923 "trsvcid": "$NVMF_PORT", 00:08:10.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.923 "hdgst": ${hdgst:-false}, 00:08:10.923 "ddgst": ${ddgst:-false} 00:08:10.923 }, 00:08:10.923 "method": "bdev_nvme_attach_controller" 00:08:10.923 } 00:08:10.923 EOF 00:08:10.923 )") 00:08:10.923 14:04:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:10.923 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:10.923 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:10.923 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.923 "params": { 00:08:10.923 "name": "Nvme0", 00:08:10.923 "trtype": "tcp", 00:08:10.923 "traddr": "10.0.0.2", 00:08:10.923 "adrfam": "ipv4", 00:08:10.923 "trsvcid": "4420", 00:08:10.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:10.923 "hdgst": false, 00:08:10.923 "ddgst": false 00:08:10.923 }, 00:08:10.923 "method": "bdev_nvme_attach_controller" 00:08:10.923 }' 00:08:10.923 [2024-10-13 14:04:14.560851] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:08:10.923 [2024-10-13 14:04:14.560904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501921 ] 00:08:11.183 [2024-10-13 14:04:14.691480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.183 [2024-10-13 14:04:14.740767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.183 [2024-10-13 14:04:14.757855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.444 Running I/O for 1 seconds... 00:08:12.386 1792.00 IOPS, 112.00 MiB/s 00:08:12.386 Latency(us) 00:08:12.386 [2024-10-13T12:04:16.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.386 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:12.386 Verification LBA range: start 0x0 length 0x400 00:08:12.386 Nvme0n1 : 1.02 1815.89 113.49 0.00 0.00 34606.81 6267.85 31311.89 00:08:12.386 [2024-10-13T12:04:16.093Z] =================================================================================================================== 00:08:12.386 [2024-10-13T12:04:16.093Z] Total : 1815.89 113.49 0.00 0.00 34606.81 6267.85 31311.89 00:08:12.386 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:12.386 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:12.386 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:12.386 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:12.386 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:12.386 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:12.387 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:12.387 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.387 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:12.387 14:04:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.387 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.387 rmmod nvme_tcp 00:08:12.387 rmmod nvme_fabrics 00:08:12.387 rmmod nvme_keyring 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1501309 ']' 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1501309 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1501309 ']' 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1501309 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501309 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501309' 00:08:12.648 killing process with pid 1501309 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1501309 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1501309 00:08:12.648 [2024-10-13 14:04:16.269000] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.648 14:04:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.648 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:15.192 00:08:15.192 real 0m14.878s 00:08:15.192 user 0m22.766s 00:08:15.192 sys 0m6.943s 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.192 ************************************ 00:08:15.192 END TEST nvmf_host_management 00:08:15.192 ************************************ 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.192 ************************************ 00:08:15.192 START TEST nvmf_lvol 00:08:15.192 ************************************ 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:15.192 * Looking for test storage... 
00:08:15.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.192 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.192 --rc genhtml_branch_coverage=1 00:08:15.192 --rc genhtml_function_coverage=1 00:08:15.192 --rc genhtml_legend=1 00:08:15.192 --rc geninfo_all_blocks=1 00:08:15.192 --rc geninfo_unexecuted_blocks=1 00:08:15.192 00:08:15.193 ' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.193 --rc genhtml_branch_coverage=1 00:08:15.193 --rc genhtml_function_coverage=1 00:08:15.193 --rc genhtml_legend=1 00:08:15.193 --rc geninfo_all_blocks=1 00:08:15.193 --rc geninfo_unexecuted_blocks=1 00:08:15.193 00:08:15.193 ' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.193 --rc genhtml_branch_coverage=1 00:08:15.193 --rc genhtml_function_coverage=1 00:08:15.193 --rc genhtml_legend=1 00:08:15.193 --rc geninfo_all_blocks=1 00:08:15.193 --rc geninfo_unexecuted_blocks=1 00:08:15.193 00:08:15.193 ' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.193 --rc genhtml_branch_coverage=1 00:08:15.193 --rc genhtml_function_coverage=1 00:08:15.193 --rc genhtml_legend=1 00:08:15.193 --rc geninfo_all_blocks=1 00:08:15.193 --rc geninfo_unexecuted_blocks=1 00:08:15.193 00:08:15.193 ' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
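Most of the xtrace run just above is scripts/common.sh deciding whether the installed lcov predates 2.x: lcov --version is piped through awk '{print $NF}', then lt/cmp_versions splits both version strings on '.', '-', and ':' and compares them numerically, component by component. A condensed, standalone sketch of that comparison (the traced helper additionally sanitizes non-numeric components through its decimal() check, omitted here):

    # lt A B  ->  exit 0 iff version A sorts strictly before version B
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n1 n2
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            n1=${ver1[v]:-0} n2=${ver2[v]:-0}   # missing components compare as 0
            (( n1 > n2 )) && return 1           # A is newer than B
            (( n1 < n2 )) && return 0           # A is older than B
        done
        return 1                                # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is older than 2'

With lcov 1.15, as traced here, the split yields (1 15) against (2), the first component comparison 1 < 2 returns 0, and the legacy --rc lcov_branch_coverage/lcov_function_coverage options are selected.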
00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.193 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:23.334 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:23.334 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.334 14:04:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:23.334 Found net devices under 0000:31:00.0: cvl_0_0 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:23.335 Found net devices under 0000:31:00.1: cvl_0_1 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:08:23.335 00:08:23.335 --- 10.0.0.2 ping statistics --- 00:08:23.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.335 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:08:23.335 00:08:23.335 --- 10.0.0.1 ping statistics --- 00:08:23.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.335 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1506669 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1506669 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1506669 ']' 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.335 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.335 [2024-10-13 14:04:26.474360] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:08:23.335 [2024-10-13 14:04:26.474421] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.335 [2024-10-13 14:04:26.615945] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
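Both pings answering means the test topology is fully wired: the harness flushed the two E810 ports, moved the target-side port cvl_0_0 into a private network namespace, addressed the initiator side as 10.0.0.1 and the target side as 10.0.0.2, and opened TCP port 4420 in iptables. A condensed sketch of that sequence, with the interface names taken from this machine (the ports under 0000:31:00.0/1; they will differ elsewhere, and the traced iptables call also tags the rule with an SPDK_NVMF comment for later cleanup):

    # Rebuild the two-endpoint NVMe-oF TCP test topology traced above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root namespace -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # namespaced target -> root namespace

The target itself then runs inside the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x7), which is why three reactors report in on cores 0 through 2 just below.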
00:08:23.335 [2024-10-13 14:04:26.665865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.335 [2024-10-13 14:04:26.693333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.335 [2024-10-13 14:04:26.693372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.335 [2024-10-13 14:04:26.693381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.335 [2024-10-13 14:04:26.693388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.335 [2024-10-13 14:04:26.693395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.335 [2024-10-13 14:04:26.695387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.335 [2024-10-13 14:04:26.695543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.335 [2024-10-13 14:04:26.695544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.723 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.023 [2024-10-13 14:04:27.493955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.023 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:24.284 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:24.284 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:24.284 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:24.284 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:24.545 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:24.806 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2ba1fcbf-3393-422c-86dc-56fe319e8f13 00:08:24.806 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ba1fcbf-3393-422c-86dc-56fe319e8f13 lvol 20 00:08:25.068 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=8b9dbcad-0626-42a6-b468-0505a408064c 00:08:25.068 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.328 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b9dbcad-0626-42a6-b468-0505a408064c 00:08:25.329 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:25.589 [2024-10-13 14:04:29.116947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.589 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.850 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1507359 00:08:25.850 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:25.850 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:26.791 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8b9dbcad-0626-42a6-b468-0505a408064c MY_SNAPSHOT 00:08:27.052 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=728e0e0f-b503-4f9a-afc0-10c7c1955702 00:08:27.052 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8b9dbcad-0626-42a6-b468-0505a408064c 30 00:08:27.312 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 728e0e0f-b503-4f9a-afc0-10c7c1955702 MY_CLONE 00:08:27.312 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=693907ee-f590-442e-ade1-7ecd1ef5570c 00:08:27.312 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 693907ee-f590-442e-ade1-7ecd1ef5570c 00:08:27.883 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1507359 00:08:36.023 Initializing NVMe Controllers 00:08:36.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:36.023 Controller IO queue size 128, less than required. 00:08:36.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:36.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:36.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:36.023 Initialization complete. Launching workers. 
00:08:36.023 ======================================================== 00:08:36.023 Latency(us) 00:08:36.023 Device Information : IOPS MiB/s Average min max 00:08:36.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16544.50 64.63 7737.94 1515.99 57773.16 00:08:36.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17294.60 67.56 7401.33 607.63 60042.09 00:08:36.024 ======================================================== 00:08:36.024 Total : 33839.10 132.18 7565.91 607.63 60042.09 00:08:36.024 00:08:36.024 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.284 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b9dbcad-0626-42a6-b468-0505a408064c 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ba1fcbf-3393-422c-86dc-56fe319e8f13 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.545 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.545 rmmod nvme_tcp 00:08:36.545 rmmod nvme_fabrics 00:08:36.806 rmmod nvme_keyring 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1506669 ']' 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1506669 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1506669 ']' 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1506669 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1506669 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.806 14:04:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1506669' 00:08:36.806 killing process with pid 1506669 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1506669 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1506669 00:08:36.806 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.807 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.353 00:08:39.353 real 0m24.102s 00:08:39.353 user 1m4.406s 00:08:39.353 sys 0m8.704s 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:39.353 ************************************ 00:08:39.353 END TEST nvmf_lvol 00:08:39.353 ************************************ 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.353 ************************************ 00:08:39.353 START TEST nvmf_lvs_grow 00:08:39.353 ************************************ 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:39.353 * Looking for test storage... 
00:08:39.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.353 --rc genhtml_branch_coverage=1 00:08:39.353 --rc genhtml_function_coverage=1 00:08:39.353 --rc genhtml_legend=1 00:08:39.353 --rc geninfo_all_blocks=1 00:08:39.353 --rc geninfo_unexecuted_blocks=1 00:08:39.353 00:08:39.353 ' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.353 --rc genhtml_branch_coverage=1 00:08:39.353 --rc genhtml_function_coverage=1 00:08:39.353 --rc genhtml_legend=1 00:08:39.353 --rc geninfo_all_blocks=1 00:08:39.353 --rc geninfo_unexecuted_blocks=1 00:08:39.353 00:08:39.353 ' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.353 --rc genhtml_branch_coverage=1 00:08:39.353 --rc genhtml_function_coverage=1 00:08:39.353 --rc genhtml_legend=1 00:08:39.353 --rc geninfo_all_blocks=1 00:08:39.353 --rc geninfo_unexecuted_blocks=1 00:08:39.353 00:08:39.353 ' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.353 --rc genhtml_branch_coverage=1 00:08:39.353 --rc genhtml_function_coverage=1 00:08:39.353 --rc genhtml_legend=1 00:08:39.353 --rc geninfo_all_blocks=1 00:08:39.353 --rc geninfo_unexecuted_blocks=1 00:08:39.353 00:08:39.353 ' 00:08:39.353 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:39.354 14:04:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.354 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:47.498 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:47.498 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.498 14:04:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:47.498 Found net devices under 0000:31:00.0: cvl_0_0 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:47.498 Found net devices under 0000:31:00.1: cvl_0_1 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.498 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:08:47.499 00:08:47.499 --- 10.0.0.2 ping statistics --- 00:08:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.499 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:08:47.499 00:08:47.499 --- 10.0.0.1 ping statistics --- 00:08:47.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.499 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1513814 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1513814 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1513814 ']' 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.499 14:04:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.499 [2024-10-13 14:04:50.628927] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
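# Note on the network setup traced above: nvmf_tcp_init splits the NIC's two
# ports between a network namespace (target side) and the host (initiator
# side), so both ends of the NVMe/TCP connection run on one machine over real
# hardware. The commands below are the ones visible in the trace, collected
# here for readability:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# The two pings above verify both directions before nvmf_tgt is launched
# inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...).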
00:08:47.499 [2024-10-13 14:04:50.628990] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.499 [2024-10-13 14:04:50.769863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:47.499 [2024-10-13 14:04:50.820329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.499 [2024-10-13 14:04:50.846758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.499 [2024-10-13 14:04:50.846797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.499 [2024-10-13 14:04:50.846805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.499 [2024-10-13 14:04:50.846812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.499 [2024-10-13 14:04:50.846818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.499 [2024-10-13 14:04:50.847558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.760 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.760 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:47.760 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:47.760 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.760 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:48.022 [2024-10-13 14:04:51.647992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.022 ************************************ 00:08:48.022 START TEST lvs_grow_clean 00:08:48.022 ************************************ 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 
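# The lvs_grow test starting here exercises bdev_lvol_grow_lvstore: it builds
# a logical-volume store on a file-backed AIO bdev, then enlarges the file and
# grows the store in place. A sketch of the setup using the rpc.py calls that
# appear in the trace below (rpc.py path and backing-file path shortened;
# /tmp/aio_file is illustrative, the test uses test/nvmf/target/aio_bdev):
truncate -s 200M /tmp/aio_file
rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs   # yields a 49-cluster store
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150      # 150 MiB lvol on the 200 MiB store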
00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.022 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.283 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.283 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.283 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.283 14:04:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:48.544 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:08:48.544 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:08:48.544 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:48.805 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:48.805 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:48.805 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec lvol 150 00:08:49.067 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c324f2fd-a5bb-4431-83fc-1a1f9227c17a 00:08:49.067 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.067 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.067 [2024-10-13 14:04:52.679765] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.067 [2024-10-13 14:04:52.679832] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.067 true 00:08:49.067 14:04:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.067 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:08:49.327 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.328 14:04:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.588 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c324f2fd-a5bb-4431-83fc-1a1f9227c17a 00:08:49.588 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:49.848 [2024-10-13 14:04:53.400398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.848 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1514498 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1514498 /var/tmp/bdevperf.sock 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1514498 ']' 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.109 14:04:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:50.109 [2024-10-13 14:04:53.666904] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
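# At this point the lvol is exported over NVMe/TCP, and bdevperf (started with
# -z, i.e. idle until told what to do over its own RPC socket) attaches to it
# as an initiator. The flow, as traced here and in the lines that follow
# (rpc.py path shortened, <lvol-uuid> is the lvol created above):
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# The bdev_get_bdevs dump that follows shows the attached namespace as Nvme0n1.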
00:08:50.109 [2024-10-13 14:04:53.666970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514498 ] 00:08:50.109 [2024-10-13 14:04:53.801225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:50.369 [2024-10-13 14:04:53.850228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.369 [2024-10-13 14:04:53.878073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.940 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.940 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:50.940 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:51.201 Nvme0n1 00:08:51.201 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:51.201 [ 00:08:51.201 { 00:08:51.201 "name": "Nvme0n1", 00:08:51.201 "aliases": [ 00:08:51.201 "c324f2fd-a5bb-4431-83fc-1a1f9227c17a" 00:08:51.201 ], 00:08:51.201 "product_name": "NVMe disk", 00:08:51.201 "block_size": 4096, 00:08:51.201 "num_blocks": 38912, 00:08:51.201 "uuid": "c324f2fd-a5bb-4431-83fc-1a1f9227c17a", 00:08:51.201 "numa_id": 0, 00:08:51.201 "assigned_rate_limits": { 00:08:51.201 "rw_ios_per_sec": 0, 00:08:51.201 "rw_mbytes_per_sec": 0, 00:08:51.201 "r_mbytes_per_sec": 0, 00:08:51.201 "w_mbytes_per_sec": 0 00:08:51.201 }, 00:08:51.201 "claimed": false, 00:08:51.201 "zoned": false, 00:08:51.201 "supported_io_types": { 00:08:51.201 "read": true, 00:08:51.201 "write": true, 00:08:51.201 "unmap": true, 00:08:51.201 "flush": true, 00:08:51.201 "reset": true, 00:08:51.201 "nvme_admin": true, 00:08:51.201 "nvme_io": true, 00:08:51.201 "nvme_io_md": false, 00:08:51.201 "write_zeroes": true, 00:08:51.201 "zcopy": false, 00:08:51.201 "get_zone_info": false, 00:08:51.201 "zone_management": false, 00:08:51.201 "zone_append": false, 00:08:51.201 "compare": true, 00:08:51.201 "compare_and_write": true, 00:08:51.201 "abort": true, 00:08:51.201 "seek_hole": false, 00:08:51.201 "seek_data": false, 00:08:51.201 "copy": true, 00:08:51.201 "nvme_iov_md": false 00:08:51.201 }, 00:08:51.201 "memory_domains": [ 00:08:51.201 { 00:08:51.201 "dma_device_id": "system", 00:08:51.201 "dma_device_type": 1 00:08:51.201 } 00:08:51.201 ], 00:08:51.201 "driver_specific": { 00:08:51.201 "nvme": [ 00:08:51.201 { 00:08:51.201 "trid": { 00:08:51.201 "trtype": "TCP", 00:08:51.201 "adrfam": "IPv4", 00:08:51.201 "traddr": "10.0.0.2", 00:08:51.201 "trsvcid": "4420", 00:08:51.201 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:51.201 }, 00:08:51.201 "ctrlr_data": { 00:08:51.201 "cntlid": 1, 00:08:51.201 "vendor_id": "0x8086", 00:08:51.201 "model_number": "SPDK bdev Controller", 00:08:51.201 "serial_number": "SPDK0", 00:08:51.201 "firmware_revision": "25.01", 00:08:51.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.201 "oacs": 
{ 00:08:51.201 "security": 0, 00:08:51.201 "format": 0, 00:08:51.201 "firmware": 0, 00:08:51.201 "ns_manage": 0 00:08:51.201 }, 00:08:51.201 "multi_ctrlr": true, 00:08:51.201 "ana_reporting": false 00:08:51.201 }, 00:08:51.201 "vs": { 00:08:51.201 "nvme_version": "1.3" 00:08:51.201 }, 00:08:51.201 "ns_data": { 00:08:51.201 "id": 1, 00:08:51.201 "can_share": true 00:08:51.201 } 00:08:51.201 } 00:08:51.201 ], 00:08:51.201 "mp_policy": "active_passive" 00:08:51.201 } 00:08:51.201 } 00:08:51.201 ] 00:08:51.462 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1514651 00:08:51.462 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:51.462 14:04:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:51.462 Running I/O for 10 seconds... 00:08:52.403 Latency(us) 00:08:52.403 [2024-10-13T12:04:56.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.403 Nvme0n1 : 1.00 24249.00 94.72 0.00 0.00 0.00 0.00 0.00 00:08:52.403 [2024-10-13T12:04:56.110Z] =================================================================================================================== 00:08:52.403 [2024-10-13T12:04:56.110Z] Total : 24249.00 94.72 0.00 0.00 0.00 0.00 0.00 00:08:52.403 00:08:53.345 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:08:53.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.345 Nvme0n1 : 2.00 24822.00 96.96 0.00 0.00 0.00 0.00 0.00 00:08:53.345 [2024-10-13T12:04:57.052Z] =================================================================================================================== 00:08:53.345 [2024-10-13T12:04:57.052Z] Total : 24822.00 96.96 0.00 0.00 0.00 0.00 0.00 00:08:53.345 00:08:53.605 true 00:08:53.605 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:08:53.605 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:53.605 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:53.605 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:53.605 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1514651 00:08:54.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.547 Nvme0n1 : 3.00 25031.33 97.78 0.00 0.00 0.00 0.00 0.00 00:08:54.547 [2024-10-13T12:04:58.254Z] =================================================================================================================== 00:08:54.547 [2024-10-13T12:04:58.254Z] Total : 25031.33 97.78 0.00 0.00 0.00 0.00 0.00 00:08:54.547 00:08:55.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.491 
Nvme0n1 : 4.00 25141.25 98.21 0.00 0.00 0.00 0.00 0.00 00:08:55.491 [2024-10-13T12:04:59.198Z] =================================================================================================================== 00:08:55.491 [2024-10-13T12:04:59.198Z] Total : 25141.25 98.21 0.00 0.00 0.00 0.00 0.00 00:08:55.491 00:08:56.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.432 Nvme0n1 : 5.00 25220.00 98.52 0.00 0.00 0.00 0.00 0.00 00:08:56.432 [2024-10-13T12:05:00.139Z] =================================================================================================================== 00:08:56.432 [2024-10-13T12:05:00.139Z] Total : 25220.00 98.52 0.00 0.00 0.00 0.00 0.00 00:08:56.432 00:08:57.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.371 Nvme0n1 : 6.00 25272.33 98.72 0.00 0.00 0.00 0.00 0.00 00:08:57.371 [2024-10-13T12:05:01.078Z] =================================================================================================================== 00:08:57.371 [2024-10-13T12:05:01.078Z] Total : 25272.33 98.72 0.00 0.00 0.00 0.00 0.00 00:08:57.371 00:08:58.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.754 Nvme0n1 : 7.00 25309.57 98.87 0.00 0.00 0.00 0.00 0.00 00:08:58.754 [2024-10-13T12:05:02.461Z] =================================================================================================================== 00:08:58.754 [2024-10-13T12:05:02.461Z] Total : 25309.57 98.87 0.00 0.00 0.00 0.00 0.00 00:08:58.754 00:08:59.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.694 Nvme0n1 : 8.00 25345.88 99.01 0.00 0.00 0.00 0.00 0.00 00:08:59.694 [2024-10-13T12:05:03.401Z] =================================================================================================================== 00:08:59.694 [2024-10-13T12:05:03.401Z] Total : 25345.88 99.01 0.00 0.00 0.00 0.00 0.00 00:08:59.694 00:09:00.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.633 Nvme0n1 : 9.00 25374.00 99.12 0.00 0.00 0.00 0.00 0.00 00:09:00.633 [2024-10-13T12:05:04.340Z] =================================================================================================================== 00:09:00.633 [2024-10-13T12:05:04.340Z] Total : 25374.00 99.12 0.00 0.00 0.00 0.00 0.00 00:09:00.633 00:09:01.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.573 Nvme0n1 : 10.00 25396.60 99.21 0.00 0.00 0.00 0.00 0.00 00:09:01.573 [2024-10-13T12:05:05.280Z] =================================================================================================================== 00:09:01.573 [2024-10-13T12:05:05.280Z] Total : 25396.60 99.21 0.00 0.00 0.00 0.00 0.00 00:09:01.573 00:09:01.573 00:09:01.573 Latency(us) 00:09:01.573 [2024-10-13T12:05:05.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.573 Nvme0n1 : 10.00 25396.93 99.21 0.00 0.00 5036.32 2531.77 14123.19 00:09:01.573 [2024-10-13T12:05:05.280Z] =================================================================================================================== 00:09:01.573 [2024-10-13T12:05:05.280Z] Total : 25396.93 99.21 0.00 0.00 5036.32 2531.77 14123.19 00:09:01.573 { 00:09:01.573 "results": [ 00:09:01.573 { 00:09:01.573 "job": "Nvme0n1", 00:09:01.573 "core_mask": "0x2", 00:09:01.573 "workload": "randwrite", 00:09:01.573 
"status": "finished", 00:09:01.573 "queue_depth": 128, 00:09:01.573 "io_size": 4096, 00:09:01.573 "runtime": 10.00491, 00:09:01.573 "iops": 25396.93010731731, 00:09:01.573 "mibps": 99.20675823170824, 00:09:01.573 "io_failed": 0, 00:09:01.573 "io_timeout": 0, 00:09:01.573 "avg_latency_us": 5036.316246668118, 00:09:01.573 "min_latency_us": 2531.7741396592046, 00:09:01.573 "max_latency_us": 14123.19411961243 00:09:01.573 } 00:09:01.573 ], 00:09:01.573 "core_count": 1 00:09:01.573 } 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1514498 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1514498 ']' 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1514498 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1514498 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514498' 00:09:01.573 killing process with pid 1514498 00:09:01.573 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1514498 00:09:01.573 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.573 00:09:01.573 Latency(us) 00:09:01.573 [2024-10-13T12:05:05.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.573 [2024-10-13T12:05:05.281Z] =================================================================================================================== 00:09:01.574 [2024-10-13T12:05:05.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.574 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1514498 00:09:01.574 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.834 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.094 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:02.094 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.354 [2024-10-13 14:05:05.961467] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:02.354 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:02.614 request: 00:09:02.615 { 00:09:02.615 "uuid": "8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec", 00:09:02.615 "method": "bdev_lvol_get_lvstores", 00:09:02.615 "req_id": 1 00:09:02.615 } 00:09:02.615 Got JSON-RPC error response 00:09:02.615 response: 00:09:02.615 { 00:09:02.615 "code": -19, 00:09:02.615 "message": "No such device" 00:09:02.615 } 00:09:02.615 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:02.615 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:02.615 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:02.615 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:02.615 14:05:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.875 aio_bdev 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c324f2fd-a5bb-4431-83fc-1a1f9227c17a 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c324f2fd-a5bb-4431-83fc-1a1f9227c17a 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.875 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c324f2fd-a5bb-4431-83fc-1a1f9227c17a -t 2000 00:09:03.135 [ 00:09:03.135 { 00:09:03.135 "name": "c324f2fd-a5bb-4431-83fc-1a1f9227c17a", 00:09:03.135 "aliases": [ 00:09:03.135 "lvs/lvol" 00:09:03.135 ], 00:09:03.135 "product_name": "Logical Volume", 00:09:03.135 "block_size": 4096, 00:09:03.135 "num_blocks": 38912, 00:09:03.135 "uuid": "c324f2fd-a5bb-4431-83fc-1a1f9227c17a", 00:09:03.135 "assigned_rate_limits": { 00:09:03.135 "rw_ios_per_sec": 0, 00:09:03.135 "rw_mbytes_per_sec": 0, 00:09:03.135 "r_mbytes_per_sec": 0, 00:09:03.135 "w_mbytes_per_sec": 0 00:09:03.135 }, 00:09:03.135 "claimed": false, 00:09:03.135 "zoned": false, 00:09:03.135 "supported_io_types": { 00:09:03.135 "read": true, 00:09:03.135 "write": true, 00:09:03.135 "unmap": true, 00:09:03.135 "flush": false, 00:09:03.135 "reset": true, 00:09:03.135 "nvme_admin": false, 00:09:03.135 "nvme_io": false, 00:09:03.135 "nvme_io_md": false, 00:09:03.135 "write_zeroes": true, 00:09:03.135 "zcopy": false, 00:09:03.135 "get_zone_info": false, 00:09:03.135 "zone_management": false, 00:09:03.136 "zone_append": false, 00:09:03.136 "compare": false, 00:09:03.136 "compare_and_write": false, 00:09:03.136 "abort": false, 00:09:03.136 "seek_hole": true, 00:09:03.136 "seek_data": true, 00:09:03.136 "copy": false, 00:09:03.136 "nvme_iov_md": false 00:09:03.136 }, 00:09:03.136 "driver_specific": { 00:09:03.136 "lvol": { 00:09:03.136 "lvol_store_uuid": "8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec", 00:09:03.136 "base_bdev": "aio_bdev", 00:09:03.136 "thin_provision": false, 00:09:03.136 "num_allocated_clusters": 38, 00:09:03.136 "snapshot": false, 00:09:03.136 "clone": false, 00:09:03.136 "esnap_clone": false 00:09:03.136 } 00:09:03.136 } 00:09:03.136 } 00:09:03.136 ] 00:09:03.136 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:03.136 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:03.136 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:03.136 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:03.136 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:03.136 14:05:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:03.396 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:03.396 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c324f2fd-a5bb-4431-83fc-1a1f9227c17a 00:09:03.656 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ff90d4b-d2ae-4011-b8fd-b4f6f37699ec 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.916 00:09:03.916 real 0m15.835s 00:09:03.916 user 0m15.385s 00:09:03.916 sys 0m1.407s 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:03.916 ************************************ 00:09:03.916 END TEST lvs_grow_clean 00:09:03.916 ************************************ 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.916 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:04.176 ************************************ 00:09:04.176 START TEST lvs_grow_dirty 00:09:04.176 ************************************ 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:04.176 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:04.436 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:04.436 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:04.436 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:04.695 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:04.695 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:04.695 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8fb12253-f84c-40d1-935a-388cd7dca23c lvol 150 00:09:04.695 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=929e7be0-d532-40d6-985e-039430f9f888 00:09:04.695 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:04.695 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:04.956 [2024-10-13 14:05:08.519977] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:04.956 [2024-10-13 14:05:08.520018] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:04.956 true 00:09:04.956 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:04.956 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:05.333 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:05.333 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:05.333 14:05:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 929e7be0-d532-40d6-985e-039430f9f888 00:09:05.615 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:05.615 [2024-10-13 14:05:09.164330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.615 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1517623 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1517623 /var/tmp/bdevperf.sock 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1517623 ']' 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.875 14:05:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.875 [2024-10-13 14:05:09.378733] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:09:05.875 [2024-10-13 14:05:09.378782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517623 ] 00:09:05.875 [2024-10-13 14:05:09.509229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:05.875 [2024-10-13 14:05:09.555930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.875 [2024-10-13 14:05:09.572343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.815 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.815 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:06.815 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:06.815 Nvme0n1 00:09:06.815 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:07.075 [ 00:09:07.075 { 00:09:07.075 "name": "Nvme0n1", 00:09:07.075 "aliases": [ 00:09:07.075 "929e7be0-d532-40d6-985e-039430f9f888" 00:09:07.075 ], 00:09:07.075 "product_name": "NVMe disk", 00:09:07.075 "block_size": 4096, 00:09:07.075 "num_blocks": 38912, 00:09:07.075 "uuid": "929e7be0-d532-40d6-985e-039430f9f888", 00:09:07.075 "numa_id": 0, 00:09:07.075 "assigned_rate_limits": { 00:09:07.075 "rw_ios_per_sec": 0, 00:09:07.075 "rw_mbytes_per_sec": 0, 00:09:07.075 "r_mbytes_per_sec": 0, 00:09:07.075 "w_mbytes_per_sec": 0 00:09:07.075 }, 00:09:07.075 "claimed": false, 00:09:07.075 "zoned": false, 00:09:07.075 "supported_io_types": { 00:09:07.075 "read": true, 00:09:07.075 "write": true, 00:09:07.075 "unmap": true, 00:09:07.075 "flush": true, 00:09:07.075 "reset": true, 00:09:07.075 "nvme_admin": true, 00:09:07.075 "nvme_io": true, 00:09:07.075 "nvme_io_md": false, 00:09:07.075 "write_zeroes": true, 00:09:07.075 "zcopy": false, 00:09:07.075 "get_zone_info": false, 00:09:07.075 "zone_management": false, 00:09:07.075 "zone_append": false, 00:09:07.075 "compare": true, 00:09:07.075 "compare_and_write": true, 00:09:07.075 "abort": true, 00:09:07.075 "seek_hole": false, 00:09:07.075 "seek_data": false, 00:09:07.075 "copy": true, 00:09:07.075 "nvme_iov_md": false 00:09:07.075 }, 00:09:07.075 "memory_domains": [ 00:09:07.075 { 00:09:07.075 "dma_device_id": "system", 00:09:07.075 "dma_device_type": 1 00:09:07.075 } 00:09:07.075 ], 00:09:07.075 "driver_specific": { 00:09:07.075 "nvme": [ 00:09:07.075 { 00:09:07.075 "trid": { 00:09:07.075 "trtype": "TCP", 00:09:07.075 "adrfam": "IPv4", 00:09:07.075 "traddr": "10.0.0.2", 00:09:07.075 "trsvcid": "4420", 00:09:07.075 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:07.075 }, 00:09:07.075 "ctrlr_data": { 00:09:07.075 "cntlid": 1, 00:09:07.075 "vendor_id": "0x8086", 00:09:07.075 "model_number": "SPDK bdev Controller", 00:09:07.075 "serial_number": "SPDK0", 00:09:07.075 "firmware_revision": "25.01", 00:09:07.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.075 "oacs": 
{ 00:09:07.075 "security": 0, 00:09:07.075 "format": 0, 00:09:07.075 "firmware": 0, 00:09:07.075 "ns_manage": 0 00:09:07.075 }, 00:09:07.075 "multi_ctrlr": true, 00:09:07.075 "ana_reporting": false 00:09:07.075 }, 00:09:07.075 "vs": { 00:09:07.075 "nvme_version": "1.3" 00:09:07.075 }, 00:09:07.075 "ns_data": { 00:09:07.075 "id": 1, 00:09:07.075 "can_share": true 00:09:07.075 } 00:09:07.075 } 00:09:07.075 ], 00:09:07.075 "mp_policy": "active_passive" 00:09:07.075 } 00:09:07.075 } 00:09:07.075 ] 00:09:07.075 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:07.075 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1517853 00:09:07.075 14:05:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:07.075 Running I/O for 10 seconds... 00:09:08.014 Latency(us) 00:09:08.014 [2024-10-13T12:05:11.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.014 Nvme0n1 : 1.00 25048.00 97.84 0.00 0.00 0.00 0.00 0.00 00:09:08.014 [2024-10-13T12:05:11.721Z] =================================================================================================================== 00:09:08.014 [2024-10-13T12:05:11.721Z] Total : 25048.00 97.84 0.00 0.00 0.00 0.00 0.00 00:09:08.014 00:09:08.955 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:09.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.214 Nvme0n1 : 2.00 25204.00 98.45 0.00 0.00 0.00 0.00 0.00 00:09:09.215 [2024-10-13T12:05:12.922Z] =================================================================================================================== 00:09:09.215 [2024-10-13T12:05:12.922Z] Total : 25204.00 98.45 0.00 0.00 0.00 0.00 0.00 00:09:09.215 00:09:09.215 true 00:09:09.215 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:09.215 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:09.474 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:09.474 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:09.474 14:05:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1517853 00:09:10.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.045 Nvme0n1 : 3.00 25286.00 98.77 0.00 0.00 0.00 0.00 0.00 00:09:10.045 [2024-10-13T12:05:13.752Z] =================================================================================================================== 00:09:10.045 [2024-10-13T12:05:13.752Z] Total : 25286.00 98.77 0.00 0.00 0.00 0.00 0.00 00:09:10.045 00:09:10.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.986 
Nvme0n1 : 4.00 25331.75 98.95 0.00 0.00 0.00 0.00 0.00 00:09:10.986 [2024-10-13T12:05:14.693Z] =================================================================================================================== 00:09:10.986 [2024-10-13T12:05:14.693Z] Total : 25331.75 98.95 0.00 0.00 0.00 0.00 0.00 00:09:10.986 00:09:12.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.369 Nvme0n1 : 5.00 25372.20 99.11 0.00 0.00 0.00 0.00 0.00 00:09:12.369 [2024-10-13T12:05:16.076Z] =================================================================================================================== 00:09:12.369 [2024-10-13T12:05:16.076Z] Total : 25372.20 99.11 0.00 0.00 0.00 0.00 0.00 00:09:12.369 00:09:13.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.311 Nvme0n1 : 6.00 25399.50 99.22 0.00 0.00 0.00 0.00 0.00 00:09:13.311 [2024-10-13T12:05:17.018Z] =================================================================================================================== 00:09:13.311 [2024-10-13T12:05:17.018Z] Total : 25399.50 99.22 0.00 0.00 0.00 0.00 0.00 00:09:13.311 00:09:14.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.254 Nvme0n1 : 7.00 25419.00 99.29 0.00 0.00 0.00 0.00 0.00 00:09:14.254 [2024-10-13T12:05:17.961Z] =================================================================================================================== 00:09:14.254 [2024-10-13T12:05:17.961Z] Total : 25419.00 99.29 0.00 0.00 0.00 0.00 0.00 00:09:14.254 00:09:15.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.195 Nvme0n1 : 8.00 25441.12 99.38 0.00 0.00 0.00 0.00 0.00 00:09:15.195 [2024-10-13T12:05:18.902Z] =================================================================================================================== 00:09:15.195 [2024-10-13T12:05:18.902Z] Total : 25441.12 99.38 0.00 0.00 0.00 0.00 0.00 00:09:15.195 00:09:16.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.136 Nvme0n1 : 9.00 25451.33 99.42 0.00 0.00 0.00 0.00 0.00 00:09:16.136 [2024-10-13T12:05:19.843Z] =================================================================================================================== 00:09:16.136 [2024-10-13T12:05:19.843Z] Total : 25451.33 99.42 0.00 0.00 0.00 0.00 0.00 00:09:16.136 00:09:17.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.078 Nvme0n1 : 10.00 25459.80 99.45 0.00 0.00 0.00 0.00 0.00 00:09:17.078 [2024-10-13T12:05:20.785Z] =================================================================================================================== 00:09:17.078 [2024-10-13T12:05:20.785Z] Total : 25459.80 99.45 0.00 0.00 0.00 0.00 0.00 00:09:17.078 00:09:17.078 00:09:17.078 Latency(us) 00:09:17.078 [2024-10-13T12:05:20.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.078 Nvme0n1 : 10.00 25462.29 99.46 0.00 0.00 5023.82 3120.24 11769.33 00:09:17.078 [2024-10-13T12:05:20.785Z] =================================================================================================================== 00:09:17.078 [2024-10-13T12:05:20.785Z] Total : 25462.29 99.46 0.00 0.00 5023.82 3120.24 11769.33 00:09:17.078 { 00:09:17.078 "results": [ 00:09:17.078 { 00:09:17.078 "job": "Nvme0n1", 00:09:17.078 "core_mask": "0x2", 00:09:17.078 "workload": "randwrite", 00:09:17.078 
"status": "finished", 00:09:17.078 "queue_depth": 128, 00:09:17.078 "io_size": 4096, 00:09:17.078 "runtime": 10.004051, 00:09:17.078 "iops": 25462.285228254033, 00:09:17.078 "mibps": 99.46205167286732, 00:09:17.078 "io_failed": 0, 00:09:17.078 "io_timeout": 0, 00:09:17.078 "avg_latency_us": 5023.822321333994, 00:09:17.078 "min_latency_us": 3120.2405613097226, 00:09:17.078 "max_latency_us": 11769.328433010358 00:09:17.078 } 00:09:17.078 ], 00:09:17.078 "core_count": 1 00:09:17.078 } 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1517623 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1517623 ']' 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1517623 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1517623 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1517623' 00:09:17.078 killing process with pid 1517623 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1517623 00:09:17.078 Received shutdown signal, test time was about 10.000000 seconds 00:09:17.078 00:09:17.078 Latency(us) 00:09:17.078 [2024-10-13T12:05:20.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.078 [2024-10-13T12:05:20.785Z] =================================================================================================================== 00:09:17.078 [2024-10-13T12:05:20.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:17.078 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1517623 00:09:17.339 14:05:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.339 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:17.600 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:17.600 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1513814 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1513814 00:09:17.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1513814 Killed "${NVMF_APP[@]}" "$@" 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1519995 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1519995 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1519995 ']' 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.861 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.861 [2024-10-13 14:05:21.498353] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:09:17.861 [2024-10-13 14:05:21.498406] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.122 [2024-10-13 14:05:21.636821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:18.122 [2024-10-13 14:05:21.682681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.122 [2024-10-13 14:05:21.706184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.122 [2024-10-13 14:05:21.706222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:18.122 [2024-10-13 14:05:21.706228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.122 [2024-10-13 14:05:21.706233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.122 [2024-10-13 14:05:21.706238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.122 [2024-10-13 14:05:21.706891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.694 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.955 [2024-10-13 14:05:22.481676] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:18.955 [2024-10-13 14:05:22.481785] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:18.955 [2024-10-13 14:05:22.481807] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 929e7be0-d532-40d6-985e-039430f9f888 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=929e7be0-d532-40d6-985e-039430f9f888 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.955 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:19.216 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 929e7be0-d532-40d6-985e-039430f9f888 -t 2000 00:09:19.216 [ 00:09:19.216 { 00:09:19.216 "name": "929e7be0-d532-40d6-985e-039430f9f888", 00:09:19.216 "aliases": [ 00:09:19.216 "lvs/lvol" 00:09:19.216 ], 00:09:19.216 "product_name": 
"Logical Volume", 00:09:19.216 "block_size": 4096, 00:09:19.216 "num_blocks": 38912, 00:09:19.216 "uuid": "929e7be0-d532-40d6-985e-039430f9f888", 00:09:19.216 "assigned_rate_limits": { 00:09:19.216 "rw_ios_per_sec": 0, 00:09:19.216 "rw_mbytes_per_sec": 0, 00:09:19.216 "r_mbytes_per_sec": 0, 00:09:19.216 "w_mbytes_per_sec": 0 00:09:19.216 }, 00:09:19.216 "claimed": false, 00:09:19.216 "zoned": false, 00:09:19.216 "supported_io_types": { 00:09:19.216 "read": true, 00:09:19.216 "write": true, 00:09:19.216 "unmap": true, 00:09:19.216 "flush": false, 00:09:19.216 "reset": true, 00:09:19.216 "nvme_admin": false, 00:09:19.216 "nvme_io": false, 00:09:19.216 "nvme_io_md": false, 00:09:19.216 "write_zeroes": true, 00:09:19.216 "zcopy": false, 00:09:19.216 "get_zone_info": false, 00:09:19.216 "zone_management": false, 00:09:19.216 "zone_append": false, 00:09:19.216 "compare": false, 00:09:19.216 "compare_and_write": false, 00:09:19.216 "abort": false, 00:09:19.216 "seek_hole": true, 00:09:19.216 "seek_data": true, 00:09:19.216 "copy": false, 00:09:19.216 "nvme_iov_md": false 00:09:19.216 }, 00:09:19.216 "driver_specific": { 00:09:19.216 "lvol": { 00:09:19.216 "lvol_store_uuid": "8fb12253-f84c-40d1-935a-388cd7dca23c", 00:09:19.216 "base_bdev": "aio_bdev", 00:09:19.216 "thin_provision": false, 00:09:19.216 "num_allocated_clusters": 38, 00:09:19.216 "snapshot": false, 00:09:19.216 "clone": false, 00:09:19.216 "esnap_clone": false 00:09:19.216 } 00:09:19.216 } 00:09:19.216 } 00:09:19.216 ] 00:09:19.216 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:19.216 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:19.216 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:19.477 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:19.477 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:19.477 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:19.477 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:19.477 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.738 [2024-10-13 14:05:23.316336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:19.738 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:19.999 request: 00:09:19.999 { 00:09:19.999 "uuid": "8fb12253-f84c-40d1-935a-388cd7dca23c", 00:09:19.999 "method": "bdev_lvol_get_lvstores", 00:09:19.999 "req_id": 1 00:09:19.999 } 00:09:19.999 Got JSON-RPC error response 00:09:19.999 response: 00:09:19.999 { 00:09:19.999 "code": -19, 00:09:19.999 "message": "No such device" 00:09:19.999 } 00:09:19.999 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:20.000 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.000 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.000 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.000 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.000 aio_bdev 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 929e7be0-d532-40d6-985e-039430f9f888 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=929e7be0-d532-40d6-985e-039430f9f888 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:20.260 14:05:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:20.260 14:05:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 929e7be0-d532-40d6-985e-039430f9f888 -t 2000 00:09:20.636 [ 00:09:20.636 { 00:09:20.636 "name": "929e7be0-d532-40d6-985e-039430f9f888", 00:09:20.636 "aliases": [ 00:09:20.636 "lvs/lvol" 00:09:20.636 ], 00:09:20.636 "product_name": "Logical Volume", 00:09:20.636 "block_size": 4096, 00:09:20.636 "num_blocks": 38912, 00:09:20.636 "uuid": "929e7be0-d532-40d6-985e-039430f9f888", 00:09:20.636 "assigned_rate_limits": { 00:09:20.636 "rw_ios_per_sec": 0, 00:09:20.636 "rw_mbytes_per_sec": 0, 00:09:20.636 "r_mbytes_per_sec": 0, 00:09:20.636 "w_mbytes_per_sec": 0 00:09:20.636 }, 00:09:20.636 "claimed": false, 00:09:20.636 "zoned": false, 00:09:20.636 "supported_io_types": { 00:09:20.636 "read": true, 00:09:20.636 "write": true, 00:09:20.636 "unmap": true, 00:09:20.636 "flush": false, 00:09:20.636 "reset": true, 00:09:20.636 "nvme_admin": false, 00:09:20.636 "nvme_io": false, 00:09:20.636 "nvme_io_md": false, 00:09:20.636 "write_zeroes": true, 00:09:20.636 "zcopy": false, 00:09:20.636 "get_zone_info": false, 00:09:20.636 "zone_management": false, 00:09:20.636 "zone_append": false, 00:09:20.636 "compare": false, 00:09:20.636 "compare_and_write": false, 00:09:20.636 "abort": false, 00:09:20.636 "seek_hole": true, 00:09:20.636 "seek_data": true, 00:09:20.636 "copy": false, 00:09:20.636 "nvme_iov_md": false 00:09:20.636 }, 00:09:20.636 "driver_specific": { 00:09:20.636 "lvol": { 00:09:20.636 "lvol_store_uuid": "8fb12253-f84c-40d1-935a-388cd7dca23c", 00:09:20.636 "base_bdev": "aio_bdev", 00:09:20.636 "thin_provision": false, 00:09:20.636 "num_allocated_clusters": 38, 00:09:20.636 "snapshot": false, 00:09:20.636 "clone": false, 00:09:20.636 "esnap_clone": false 00:09:20.636 } 00:09:20.636 } 00:09:20.636 } 00:09:20.636 ] 00:09:20.636 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:20.637 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:20.637 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:20.637 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:20.637 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:20.637 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:20.931 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:20.931 14:05:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 929e7be0-d532-40d6-985e-039430f9f888 00:09:20.931 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fb12253-f84c-40d1-935a-388cd7dca23c 00:09:21.191 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:21.191 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.191 00:09:21.191 real 0m17.248s 00:09:21.191 user 0m45.214s 00:09:21.191 sys 0m2.944s 00:09:21.191 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.191 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.191 ************************************ 00:09:21.191 END TEST lvs_grow_dirty 00:09:21.191 ************************************ 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:21.451 nvmf_trace.0 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.451 14:05:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.451 rmmod nvme_tcp 00:09:21.451 rmmod nvme_fabrics 00:09:21.451 rmmod nvme_keyring 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1519995 ']' 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1519995 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1519995 ']' 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1519995 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.451 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1519995 00:09:21.452 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.452 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.452 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1519995' 00:09:21.452 killing process with pid 1519995 00:09:21.452 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1519995 00:09:21.452 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1519995 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.715 14:05:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.626 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.626 00:09:23.626 real 0m44.649s 00:09:23.626 user 1m6.860s 00:09:23.626 sys 0m10.609s 00:09:23.626 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.626 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:23.626 ************************************ 00:09:23.626 END 
TEST nvmf_lvs_grow 00:09:23.626 ************************************ 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 ************************************ 00:09:23.887 START TEST nvmf_bdev_io_wait 00:09:23.887 ************************************ 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:23.887 * Looking for test storage... 00:09:23.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.887 --rc genhtml_branch_coverage=1 00:09:23.887 --rc genhtml_function_coverage=1 00:09:23.887 --rc genhtml_legend=1 00:09:23.887 --rc geninfo_all_blocks=1 00:09:23.887 --rc geninfo_unexecuted_blocks=1 00:09:23.887 00:09:23.887 ' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.887 --rc genhtml_branch_coverage=1 00:09:23.887 --rc genhtml_function_coverage=1 00:09:23.887 --rc genhtml_legend=1 00:09:23.887 --rc geninfo_all_blocks=1 00:09:23.887 --rc geninfo_unexecuted_blocks=1 00:09:23.887 00:09:23.887 ' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.887 --rc genhtml_branch_coverage=1 00:09:23.887 --rc genhtml_function_coverage=1 00:09:23.887 --rc genhtml_legend=1 00:09:23.887 --rc geninfo_all_blocks=1 00:09:23.887 --rc geninfo_unexecuted_blocks=1 00:09:23.887 00:09:23.887 ' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:23.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.887 --rc genhtml_branch_coverage=1 00:09:23.887 --rc genhtml_function_coverage=1 00:09:23.887 --rc genhtml_legend=1 00:09:23.887 --rc geninfo_all_blocks=1 00:09:23.887 --rc geninfo_unexecuted_blocks=1 00:09:23.887 00:09:23.887 ' 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.887 14:05:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.887 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.888 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.888 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.149 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:24.149 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:24.149 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.149 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.150 14:05:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:32.291 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:32.291 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.291 14:05:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:32.291 Found net devices under 0000:31:00.0: cvl_0_0 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:32.291 Found net devices under 0000:31:00.1: cvl_0_1 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.291 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.292 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:09:32.292 00:09:32.292 --- 10.0.0.2 ping statistics --- 00:09:32.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.292 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:09:32.292 00:09:32.292 --- 10.0.0.1 ping statistics --- 00:09:32.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.292 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1525131 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1525131 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1525131 ']' 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.292 14:05:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.292 [2024-10-13 14:05:35.392367] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
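The target now coming up was reached through SPDK's standard recipe for carving a two-port physical NIC into an isolated NVMe/TCP test bed, traced step by step above: the target-side port moves into its own network namespace, each side gets an address on 10.0.0.0/24, an iptables rule tagged with an SPDK_NVMF comment opens port 4420, and a ping in each direction proves the path. A condensed sketch of those steps, reusing the interface, namespace and binary names from this run (all environment-specific):

#!/usr/bin/env bash
# Minimal sketch of the netns setup performed by nvmftestinit above.
# cvl_0_0/cvl_0_1, the namespace name and the nvmf_tgt path come from this log.
set -euo pipefail
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the comment lets teardown strip exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator

# Start the target inside the namespace, as nvmfappstart does next in the trace.
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

Because 10.0.0.2 lives only inside the namespace, the bdevperf initiators launched later traverse the physical link between the two ports rather than kernel loopback, which is the point of NET_TYPE=phy.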
00:09:32.292 [2024-10-13 14:05:35.392429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.292 [2024-10-13 14:05:35.533926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:32.292 [2024-10-13 14:05:35.581764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.292 [2024-10-13 14:05:35.611180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.292 [2024-10-13 14:05:35.611222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.292 [2024-10-13 14:05:35.611230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.292 [2024-10-13 14:05:35.611237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.292 [2024-10-13 14:05:35.611243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.292 [2024-10-13 14:05:35.613161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.292 [2024-10-13 14:05:35.613476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.292 [2024-10-13 14:05:35.613611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.292 [2024-10-13 14:05:35.613612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.553 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.553 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:32.553 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:32.553 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.553 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.815 14:05:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 [2024-10-13 14:05:36.346879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 Malloc0 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.815 [2024-10-13 14:05:36.412831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1525298 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1525301 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:32.815 { 00:09:32.815 "params": { 00:09:32.815 "name": "Nvme$subsystem", 00:09:32.815 "trtype": "$TEST_TRANSPORT", 00:09:32.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.815 "adrfam": "ipv4", 00:09:32.815 "trsvcid": "$NVMF_PORT", 00:09:32.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.815 "hdgst": ${hdgst:-false}, 00:09:32.815 "ddgst": ${ddgst:-false} 00:09:32.815 }, 00:09:32.815 "method": "bdev_nvme_attach_controller" 00:09:32.815 } 00:09:32.815 EOF 00:09:32.815 )") 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1525303 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:32.815 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:32.815 { 00:09:32.815 "params": { 00:09:32.815 "name": "Nvme$subsystem", 00:09:32.815 "trtype": "$TEST_TRANSPORT", 00:09:32.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.815 "adrfam": "ipv4", 00:09:32.815 "trsvcid": "$NVMF_PORT", 00:09:32.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.816 "hdgst": ${hdgst:-false}, 00:09:32.816 "ddgst": ${ddgst:-false} 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 } 00:09:32.816 EOF 00:09:32.816 )") 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1525308 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:32.816 { 00:09:32.816 "params": { 00:09:32.816 "name": "Nvme$subsystem", 00:09:32.816 "trtype": "$TEST_TRANSPORT", 00:09:32.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.816 "adrfam": "ipv4", 
00:09:32.816 "trsvcid": "$NVMF_PORT", 00:09:32.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.816 "hdgst": ${hdgst:-false}, 00:09:32.816 "ddgst": ${ddgst:-false} 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 } 00:09:32.816 EOF 00:09:32.816 )") 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:32.816 { 00:09:32.816 "params": { 00:09:32.816 "name": "Nvme$subsystem", 00:09:32.816 "trtype": "$TEST_TRANSPORT", 00:09:32.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.816 "adrfam": "ipv4", 00:09:32.816 "trsvcid": "$NVMF_PORT", 00:09:32.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.816 "hdgst": ${hdgst:-false}, 00:09:32.816 "ddgst": ${ddgst:-false} 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 } 00:09:32.816 EOF 00:09:32.816 )") 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1525298 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:32.816 "params": { 00:09:32.816 "name": "Nvme1", 00:09:32.816 "trtype": "tcp", 00:09:32.816 "traddr": "10.0.0.2", 00:09:32.816 "adrfam": "ipv4", 00:09:32.816 "trsvcid": "4420", 00:09:32.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.816 "hdgst": false, 00:09:32.816 "ddgst": false 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 }' 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:32.816 "params": { 00:09:32.816 "name": "Nvme1", 00:09:32.816 "trtype": "tcp", 00:09:32.816 "traddr": "10.0.0.2", 00:09:32.816 "adrfam": "ipv4", 00:09:32.816 "trsvcid": "4420", 00:09:32.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.816 "hdgst": false, 00:09:32.816 "ddgst": false 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 }' 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:32.816 "params": { 00:09:32.816 "name": "Nvme1", 00:09:32.816 "trtype": "tcp", 00:09:32.816 "traddr": "10.0.0.2", 00:09:32.816 "adrfam": "ipv4", 00:09:32.816 "trsvcid": "4420", 00:09:32.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.816 "hdgst": false, 00:09:32.816 "ddgst": false 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 }' 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:32.816 14:05:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:32.816 "params": { 00:09:32.816 "name": "Nvme1", 00:09:32.816 "trtype": "tcp", 00:09:32.816 "traddr": "10.0.0.2", 00:09:32.816 "adrfam": "ipv4", 00:09:32.816 "trsvcid": "4420", 00:09:32.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.816 "hdgst": false, 00:09:32.816 "ddgst": false 00:09:32.816 }, 00:09:32.816 "method": "bdev_nvme_attach_controller" 00:09:32.816 }' 00:09:32.816 [2024-10-13 14:05:36.476527] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:09:32.816 [2024-10-13 14:05:36.476589] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:32.816 [2024-10-13 14:05:36.479353] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:09:32.816 [2024-10-13 14:05:36.479435] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:32.816 [2024-10-13 14:05:36.482652] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:09:32.816 [2024-10-13 14:05:36.482715] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:32.816 [2024-10-13 14:05:36.485054] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
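The gen_nvmf_target_json expansions printed above all resolve to the same single Nvme1 controller at 10.0.0.2:4420. The helper builds one bdev_nvme_attach_controller entry per requested subsystem from a here-document and runs the result through jq . as a validity check before printf emits it into the /dev/fd/63 process substitution each bdevperf reads. A trimmed sketch follows; the params block is verbatim from the trace, while the outer subsystems wrapper is an assumption reconstructed from the shape bdevperf's --json option consumes:

# Sketch of nvmf/common.sh's gen_nvmf_target_json as traced above.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are set by the harness.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # jq . both pretty-prints and rejects malformed JSON before bdevperf sees it.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}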
00:09:32.816 [2024-10-13 14:05:36.485128] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:33.078 [2024-10-13 14:05:36.749729] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:33.339 [2024-10-13 14:05:36.801269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.339 [2024-10-13 14:05:36.814877] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:33.339 [2024-10-13 14:05:36.818926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:33.339 [2024-10-13 14:05:36.864195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.339 [2024-10-13 14:05:36.881123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:33.339 [2024-10-13 14:05:36.906455] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:33.339 [2024-10-13 14:05:36.952505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.339 [2024-10-13 14:05:36.971208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:33.339 [2024-10-13 14:05:36.973417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:33.339 Running I/O for 1 seconds... 00:09:33.339 [2024-10-13 14:05:37.024692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.339 Running I/O for 1 seconds... 00:09:33.339 [2024-10-13 14:05:37.040662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:33.600 Running I/O for 1 seconds... 00:09:33.600 Running I/O for 1 seconds... 
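The four bdevperf instances started above differ only in workload (-w write, read, flush, unmap), core mask and DPDK shared-memory id; distinct -m and -i values let four DPDK processes coexist on one host while all drive nqn.2016-06.io.spdk:cnode1. A condensed sketch of the launch-and-wait pattern, with paths from this run and gen_nvmf_target_json as sketched earlier:

# Four concurrent bdevperf instances, one per I/O type, as in the trace above.
BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
COMMON=(-q 128 -o 4096 -t 1 -s 256)

"$BPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${COMMON[@]}" -w write & WRITE_PID=$!
"$BPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${COMMON[@]}" -w read  & READ_PID=$!
"$BPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${COMMON[@]}" -w flush & FLUSH_PID=$!
"$BPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${COMMON[@]}" -w unmap & UNMAP_PID=$!

wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The results that follow bear the setup out: write, read and unmap each land near 10-11k IOPS at queue depth 128, while flush reports roughly 187k IOPS, plausibly because flush moves no data for the RAM-backed Malloc0 namespace and so measures little beyond NVMe/TCP round-trip overhead.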
00:09:34.541 11511.00 IOPS, 44.96 MiB/s 00:09:34.541 Latency(us) 00:09:34.541 [2024-10-13T12:05:38.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.541 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:34.541 Nvme1n1 : 1.01 11567.88 45.19 0.00 0.00 11025.74 5884.66 17188.69 00:09:34.541 [2024-10-13T12:05:38.248Z] =================================================================================================================== 00:09:34.541 [2024-10-13T12:05:38.248Z] Total : 11567.88 45.19 0.00 0.00 11025.74 5884.66 17188.69 00:09:34.541 9506.00 IOPS, 37.13 MiB/s 00:09:34.541 Latency(us) 00:09:34.541 [2024-10-13T12:05:38.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.541 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:34.541 Nvme1n1 : 1.01 9576.41 37.41 0.00 0.00 13315.56 5994.15 22005.91 00:09:34.541 [2024-10-13T12:05:38.248Z] =================================================================================================================== 00:09:34.541 [2024-10-13T12:05:38.248Z] Total : 9576.41 37.41 0.00 0.00 13315.56 5994.15 22005.91 00:09:34.541 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1525301 00:09:34.541 9942.00 IOPS, 38.84 MiB/s 00:09:34.541 Latency(us) 00:09:34.541 [2024-10-13T12:05:38.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.541 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:34.541 Nvme1n1 : 1.01 10002.93 39.07 0.00 0.00 12750.57 5391.99 23319.69 00:09:34.541 [2024-10-13T12:05:38.248Z] =================================================================================================================== 00:09:34.541 [2024-10-13T12:05:38.248Z] Total : 10002.93 39.07 0.00 0.00 12750.57 5391.99 23319.69 00:09:34.803 187680.00 IOPS, 733.12 MiB/s 00:09:34.803 Latency(us) 00:09:34.803 [2024-10-13T12:05:38.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.803 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:34.803 Nvme1n1 : 1.00 187306.08 731.66 0.00 0.00 679.72 309.63 1984.36 00:09:34.803 [2024-10-13T12:05:38.510Z] =================================================================================================================== 00:09:34.803 [2024-10-13T12:05:38.510Z] Total : 187306.08 731.66 0.00 0.00 679.72 309.63 1984.36 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1525303 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1525308 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- 
# nvmfcleanup 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.803 rmmod nvme_tcp 00:09:34.803 rmmod nvme_fabrics 00:09:34.803 rmmod nvme_keyring 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1525131 ']' 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1525131 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1525131 ']' 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1525131 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.803 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1525131 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1525131' 00:09:35.064 killing process with pid 1525131 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1525131 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1525131 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.064 14:05:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.064 14:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.610 00:09:37.610 real 0m13.392s 00:09:37.610 user 0m19.476s 00:09:37.610 sys 0m7.638s 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.610 ************************************ 00:09:37.610 END TEST nvmf_bdev_io_wait 00:09:37.610 ************************************ 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.610 ************************************ 00:09:37.610 START TEST nvmf_queue_depth 00:09:37.610 ************************************ 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:37.610 * Looking for test storage... 
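Before queue_depth rebuilds the same test bed, the teardown that just closed nvmf_bdev_io_wait is worth noting: the NVMe modules are unloaded, the recorded nvmfpid is killed, and iptables is restored minus only the SPDK-tagged rule, which is why the ACCEPT rule installed at init carried its long comment. A sketch of that cleanup under the names used in this harness; the remove_spdk_ns body runs with xtrace silenced above, so the netns deletion shown here is an assumption:

# Sketch of the nvmftestfini path traced at the end of nvmf_bdev_io_wait.
iptr() {
    # Replay the live ruleset minus every rule tagged SPDK_NVMF; the comment
    # attached when the ACCEPT rule was inserted makes it greppable here.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

sync
modprobe -v -r nvme-tcp           # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                   # killprocess: stop the target started earlier
iptr
ip netns delete cvl_0_0_ns_spdk   # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1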
00:09:37.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:37.610 14:05:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:37.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.610 --rc genhtml_branch_coverage=1 00:09:37.610 --rc genhtml_function_coverage=1 00:09:37.610 --rc genhtml_legend=1 00:09:37.610 --rc geninfo_all_blocks=1 00:09:37.610 --rc geninfo_unexecuted_blocks=1 00:09:37.610 00:09:37.610 ' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:37.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.610 --rc genhtml_branch_coverage=1 00:09:37.610 --rc genhtml_function_coverage=1 00:09:37.610 --rc genhtml_legend=1 00:09:37.610 --rc geninfo_all_blocks=1 00:09:37.610 --rc geninfo_unexecuted_blocks=1 00:09:37.610 00:09:37.610 ' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:37.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.610 --rc genhtml_branch_coverage=1 00:09:37.610 --rc genhtml_function_coverage=1 00:09:37.610 --rc genhtml_legend=1 00:09:37.610 --rc geninfo_all_blocks=1 00:09:37.610 --rc geninfo_unexecuted_blocks=1 00:09:37.610 00:09:37.610 ' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:37.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.610 --rc genhtml_branch_coverage=1 00:09:37.610 --rc genhtml_function_coverage=1 00:09:37.610 --rc genhtml_legend=1 00:09:37.610 --rc geninfo_all_blocks=1 00:09:37.610 --rc geninfo_unexecuted_blocks=1 00:09:37.610 00:09:37.610 ' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.610 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.611 14:05:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:45.750 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:45.750 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.750 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:45.751 Found net devices under 0000:31:00.0: cvl_0_0 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:45.751 Found net devices under 0000:31:00.1: cvl_0_1 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:09:45.751 00:09:45.751 --- 10.0.0.2 ping statistics --- 00:09:45.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.751 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:09:45.751 00:09:45.751 --- 10.0.0.1 ping statistics --- 00:09:45.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.751 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1530055 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1530055 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1530055 ']' 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.751 14:05:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.751 [2024-10-13 14:05:48.838700] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
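The nvmfappstart step traced here amounts to launching nvmf_tgt inside the target network namespace and blocking until its RPC socket answers. A minimal sketch, assuming the workspace paths shown in this log; the readiness loop is an illustrative stand-in for the suite's waitforlisten helper, not its exact implementation:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!    # this run recorded pid 1530055
    # poll the default RPC socket until the target answers (illustrative)
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done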
00:09:45.751 [2024-10-13 14:05:48.838769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.751 [2024-10-13 14:05:48.983933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:45.751 [2024-10-13 14:05:49.034290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.751 [2024-10-13 14:05:49.060413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.751 [2024-10-13 14:05:49.060459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.751 [2024-10-13 14:05:49.060473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.751 [2024-10-13 14:05:49.060480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.751 [2024-10-13 14:05:49.060487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.751 [2024-10-13 14:05:49.061259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.012 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 [2024-10-13 14:05:49.720725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 Malloc0 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.273 14:05:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.273 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 [2024-10-13 14:05:49.781971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1530288 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1530288 /var/tmp/bdevperf.sock 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1530288 ']' 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.274 14:05:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.274 [2024-10-13 14:05:49.839683] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:09:46.274 [2024-10-13 14:05:49.839754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530288 ] 00:09:46.274 [2024-10-13 14:05:49.974808] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:46.535 [2024-10-13 14:05:50.024031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.535 [2024-10-13 14:05:50.057279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.106 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.107 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:47.107 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:47.107 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.107 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.367 NVMe0n1 00:09:47.367 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.367 14:05:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.367 Running I/O for 10 seconds... 00:09:49.690 9216.00 IOPS, 36.00 MiB/s [2024-10-13T12:05:54.336Z] 10253.00 IOPS, 40.05 MiB/s [2024-10-13T12:05:55.276Z] 10774.67 IOPS, 42.09 MiB/s [2024-10-13T12:05:56.216Z] 11256.50 IOPS, 43.97 MiB/s [2024-10-13T12:05:57.156Z] 11676.60 IOPS, 45.61 MiB/s [2024-10-13T12:05:58.095Z] 12009.00 IOPS, 46.91 MiB/s [2024-10-13T12:05:59.037Z] 12236.57 IOPS, 47.80 MiB/s [2024-10-13T12:06:00.419Z] 12403.88 IOPS, 48.45 MiB/s [2024-10-13T12:06:01.361Z] 12517.67 IOPS, 48.90 MiB/s [2024-10-13T12:06:01.361Z] 12655.20 IOPS, 49.43 MiB/s 00:09:57.654 Latency(us) 00:09:57.654 [2024-10-13T12:06:01.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.654 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:57.654 Verification LBA range: start 0x0 length 0x4000 00:09:57.654 NVMe0n1 : 10.06 12677.59 49.52 0.00 0.00 80467.30 21458.50 71820.27 00:09:57.654 [2024-10-13T12:06:01.361Z] =================================================================================================================== 00:09:57.654 [2024-10-13T12:06:01.361Z] Total : 12677.59 49.52 0.00 0.00 80467.30 21458.50 71820.27 00:09:57.654 { 00:09:57.654 "results": [ 00:09:57.654 { 00:09:57.654 "job": "NVMe0n1", 00:09:57.654 "core_mask": "0x1", 00:09:57.654 "workload": "verify", 00:09:57.654 "status": "finished", 00:09:57.654 "verify_range": { 00:09:57.654 "start": 0, 00:09:57.654 "length": 16384 00:09:57.654 }, 00:09:57.654 "queue_depth": 1024, 00:09:57.654 "io_size": 4096, 00:09:57.654 "runtime": 10.06114, 00:09:57.654 "iops": 12677.589219511905, 00:09:57.654 "mibps": 49.52183288871838, 00:09:57.654 "io_failed": 0, 00:09:57.654 "io_timeout": 0, 00:09:57.654 "avg_latency_us": 80467.29882412056, 00:09:57.654 "min_latency_us": 21458.496491814232, 00:09:57.654 "max_latency_us": 71820.27397260274 00:09:57.654 } 00:09:57.654 ], 00:09:57.654 "core_count": 1 00:09:57.654 } 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1530288 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1530288 ']' 00:09:57.654 14:06:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1530288 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530288 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530288' 00:09:57.654 killing process with pid 1530288 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1530288 00:09:57.654 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.654 00:09:57.654 Latency(us) 00:09:57.654 [2024-10-13T12:06:01.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.654 [2024-10-13T12:06:01.361Z] =================================================================================================================== 00:09:57.654 [2024-10-13T12:06:01.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1530288 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.654 rmmod nvme_tcp 00:09:57.654 rmmod nvme_fabrics 00:09:57.654 rmmod nvme_keyring 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1530055 ']' 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1530055 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1530055 ']' 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1530055 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.654 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530055 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530055' 00:09:57.915 killing process with pid 1530055 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1530055 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1530055 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.915 14:06:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.458 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.458 00:10:00.458 real 0m22.743s 00:10:00.458 user 0m25.811s 00:10:00.458 sys 0m7.146s 00:10:00.458 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.458 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.458 ************************************ 00:10:00.458 END TEST nvmf_queue_depth 00:10:00.458 ************************************ 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.459 ************************************ 00:10:00.459 START TEST nvmf_target_multipath 00:10:00.459 ************************************ 00:10:00.459 14:06:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:00.459 * Looking for test storage... 00:10:00.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:00.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.459 --rc genhtml_branch_coverage=1 00:10:00.459 --rc genhtml_function_coverage=1 00:10:00.459 --rc genhtml_legend=1 00:10:00.459 --rc geninfo_all_blocks=1 00:10:00.459 --rc geninfo_unexecuted_blocks=1 00:10:00.459 00:10:00.459 ' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:00.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.459 --rc genhtml_branch_coverage=1 00:10:00.459 --rc genhtml_function_coverage=1 00:10:00.459 --rc genhtml_legend=1 00:10:00.459 --rc geninfo_all_blocks=1 00:10:00.459 --rc geninfo_unexecuted_blocks=1 00:10:00.459 00:10:00.459 ' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:00.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.459 --rc genhtml_branch_coverage=1 00:10:00.459 --rc genhtml_function_coverage=1 00:10:00.459 --rc genhtml_legend=1 00:10:00.459 --rc geninfo_all_blocks=1 00:10:00.459 --rc geninfo_unexecuted_blocks=1 00:10:00.459 00:10:00.459 ' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:00.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.459 --rc genhtml_branch_coverage=1 00:10:00.459 --rc genhtml_function_coverage=1 00:10:00.459 --rc genhtml_legend=1 00:10:00.459 --rc geninfo_all_blocks=1 00:10:00.459 --rc geninfo_unexecuted_blocks=1 00:10:00.459 00:10:00.459 ' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:00.459 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.460 14:06:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:08.602 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:08.602 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.602 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:08.602 Found net devices under 0000:31:00.0: cvl_0_0 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.603 14:06:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:08.603 Found net devices under 0000:31:00.1: cvl_0_1 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:10:08.603 00:10:08.603 --- 10.0.0.2 ping statistics --- 00:10:08.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.603 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:10:08.603 00:10:08.603 --- 10.0.0.1 ping statistics --- 00:10:08.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.603 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:08.603 only one NIC for nvmf test 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
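The nvmf/common.sh@250-291 trace above is nvmf_tcp_init: one port of the E810 pair (0x159b) is moved into a private network namespace so target and initiator can exercise real hardware on a single host. A condensed sketch of that plumbing, using the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing seen in this run (both vary per host):

  # Namespace plumbing as traced above; names and addresses are this run's.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean ports
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Open TCP/4420; the comment tag lets teardown find and strip the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                         # root ns -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> initiator reachability

Both pings answer in under a millisecond here, after which the harness prepends "ip netns exec cvl_0_0_ns_spdk" to NVMF_APP so the target process itself starts inside the namespace.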
00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.603 rmmod nvme_tcp 00:10:08.603 rmmod nvme_fabrics 00:10:08.603 rmmod nvme_keyring 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.603 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.518 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.519 00:10:10.519 real 0m10.163s 00:10:10.519 user 0m2.231s 00:10:10.519 sys 0m5.848s 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:10.519 ************************************ 00:10:10.519 END TEST nvmf_target_multipath 00:10:10.519 ************************************ 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.519 ************************************ 00:10:10.519 START TEST nvmf_zcopy 00:10:10.519 ************************************ 00:10:10.519 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:10.519 * Looking for test storage... 
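Before the zcopy test proceeds: multipath needs two usable NICs, so with one pair multipath.sh@46-48 prints "only one NIC for nvmf test" and exits 0, and the nvmftestfini teardown in the trace just above runs twice (once from the explicit call at @47, once from the EXIT trap installed at @472). Roughly what it does, with the namespace-delete step an assumption since _remove_spdk_ns runs with its output redirected away:

  # Approximate teardown mirrored from the trace; the exact retry logic in
  # nvmf/common.sh may differ slightly from this sketch.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp || continue    # can fail while queues drain
      modprobe -v -r nvme-fabrics && break
  done
  set -e
  # Strip only the rules this test tagged, leaving other firewall state intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Assumed body of _remove_spdk_ns (its output is discarded in the trace):
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null
  ip -4 addr flush cvl_0_1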
00:10:10.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.519 --rc genhtml_branch_coverage=1 00:10:10.519 --rc genhtml_function_coverage=1 00:10:10.519 --rc genhtml_legend=1 00:10:10.519 --rc geninfo_all_blocks=1 00:10:10.519 --rc geninfo_unexecuted_blocks=1 00:10:10.519 00:10:10.519 ' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.519 --rc genhtml_branch_coverage=1 00:10:10.519 --rc genhtml_function_coverage=1 00:10:10.519 --rc genhtml_legend=1 00:10:10.519 --rc geninfo_all_blocks=1 00:10:10.519 --rc geninfo_unexecuted_blocks=1 00:10:10.519 00:10:10.519 ' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.519 --rc genhtml_branch_coverage=1 00:10:10.519 --rc genhtml_function_coverage=1 00:10:10.519 --rc genhtml_legend=1 00:10:10.519 --rc geninfo_all_blocks=1 00:10:10.519 --rc geninfo_unexecuted_blocks=1 00:10:10.519 00:10:10.519 ' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.519 --rc genhtml_branch_coverage=1 00:10:10.519 --rc genhtml_function_coverage=1 00:10:10.519 --rc genhtml_legend=1 00:10:10.519 --rc geninfo_all_blocks=1 00:10:10.519 --rc geninfo_unexecuted_blocks=1 00:10:10.519 00:10:10.519 ' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.519 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.520 14:06:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:18.665 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:18.665 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:18.665 Found net devices under 0000:31:00.0: cvl_0_0 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:18.665 Found net devices under 0000:31:00.1: cvl_0_1 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.665 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:10:18.666 00:10:18.666 --- 10.0.0.2 ping statistics --- 00:10:18.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.666 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:10:18.666 00:10:18.666 --- 10.0.0.1 ping statistics --- 00:10:18.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.666 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1541745 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1541745 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1541745 ']' 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.666 14:06:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.666 [2024-10-13 14:06:21.952138] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:10:18.666 [2024-10-13 14:06:21.952208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.666 [2024-10-13 14:06:22.094603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:10:18.666 [2024-10-13 14:06:22.142068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.666 [2024-10-13 14:06:22.168847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.666 [2024-10-13 14:06:22.168888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.666 [2024-10-13 14:06:22.168896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.666 [2024-10-13 14:06:22.168903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.666 [2024-10-13 14:06:22.168909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.666 [2024-10-13 14:06:22.169662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.237 [2024-10-13 14:06:22.821507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.237 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.237 [2024-10-13 14:06:22.845715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.238 malloc0 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:19.238 { 00:10:19.238 "params": { 00:10:19.238 "name": "Nvme$subsystem", 00:10:19.238 "trtype": "$TEST_TRANSPORT", 00:10:19.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:19.238 "adrfam": "ipv4", 00:10:19.238 "trsvcid": "$NVMF_PORT", 00:10:19.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:19.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:19.238 "hdgst": ${hdgst:-false}, 00:10:19.238 "ddgst": ${ddgst:-false} 00:10:19.238 }, 00:10:19.238 "method": "bdev_nvme_attach_controller" 00:10:19.238 } 00:10:19.238 EOF 00:10:19.238 )") 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
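gen_nvmf_target_json (nvmf/common.sh@558-584 above) assembles bdevperf's configuration on the fly: each requested subsystem contributes one heredoc fragment, the fragments are comma-joined via IFS, and jq re-serializes (and thereby validates) the result, which bdevperf reads over a pipe as --json /dev/fd/62. A trimmed sketch of the same pattern, with this run's address and port baked in where the real helper expands $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT:

  # Simplified gen_nvmf_target_json; parameter names above are the real ones,
  # the hardcoded values below are this run's.
  gen_json() {
      local -a config=()
      local subsystem
      for subsystem in "${@:-1}"; do
          # <<- strips leading tabs; body kept flush-left here for portability
          config+=("$(cat <<-EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
          )")
      done
      local IFS=,
      printf '%s\n' "${config[*]}" | jq .   # jq fails fast on a malformed fragment
  }
  # Matching the single-subsystem call at target/zcopy.sh@33, fed over a pipe:
  #   bdevperf --json <(gen_json 1) -t 10 -q 128 -w verify -o 8192

With one subsystem, as here, the comma-joined string is just the single object that printf emits at 00:10:19.238 below.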
00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:19.238 14:06:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:19.238 "params": { 00:10:19.238 "name": "Nvme1", 00:10:19.238 "trtype": "tcp", 00:10:19.238 "traddr": "10.0.0.2", 00:10:19.238 "adrfam": "ipv4", 00:10:19.238 "trsvcid": "4420", 00:10:19.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:19.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:19.238 "hdgst": false, 00:10:19.238 "ddgst": false 00:10:19.238 }, 00:10:19.238 "method": "bdev_nvme_attach_controller" 00:10:19.238 }' 00:10:19.499 [2024-10-13 14:06:22.947852] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:10:19.499 [2024-10-13 14:06:22.947922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542007 ] 00:10:19.499 [2024-10-13 14:06:23.082216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:19.499 [2024-10-13 14:06:23.133487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.499 [2024-10-13 14:06:23.161614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.759 Running I/O for 10 seconds... 00:10:21.641 6504.00 IOPS, 50.81 MiB/s [2024-10-13T12:06:26.731Z] 7660.00 IOPS, 59.84 MiB/s [2024-10-13T12:06:27.671Z] 8377.00 IOPS, 65.45 MiB/s [2024-10-13T12:06:28.610Z] 8739.75 IOPS, 68.28 MiB/s [2024-10-13T12:06:29.549Z] 8958.60 IOPS, 69.99 MiB/s [2024-10-13T12:06:30.487Z] 9100.67 IOPS, 71.10 MiB/s [2024-10-13T12:06:31.428Z] 9197.71 IOPS, 71.86 MiB/s [2024-10-13T12:06:32.367Z] 9274.12 IOPS, 72.45 MiB/s [2024-10-13T12:06:33.751Z] 9330.00 IOPS, 72.89 MiB/s [2024-10-13T12:06:33.751Z] 9378.90 IOPS, 73.27 MiB/s 00:10:30.044 Latency(us) 00:10:30.044 [2024-10-13T12:06:33.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.044 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:30.044 Verification LBA range: start 0x0 length 0x1000 00:10:30.044 Nvme1n1 : 10.01 9379.28 73.28 0.00 0.00 13601.03 533.73 28246.39 00:10:30.044 [2024-10-13T12:06:33.751Z] =================================================================================================================== 00:10:30.044 [2024-10-13T12:06:33.751Z] Total : 9379.28 73.28 0.00 0.00 13601.03 533.73 28246.39 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1544022 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:30.044 { 00:10:30.044 "params": { 00:10:30.044 "name": "Nvme$subsystem", 00:10:30.044 "trtype": "$TEST_TRANSPORT", 00:10:30.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.044 "adrfam": "ipv4", 00:10:30.044 "trsvcid": "$NVMF_PORT", 00:10:30.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.044 "hdgst": ${hdgst:-false}, 00:10:30.044 "ddgst": ${ddgst:-false} 00:10:30.044 }, 00:10:30.044 "method": "bdev_nvme_attach_controller" 00:10:30.044 } 00:10:30.044 EOF 00:10:30.044 )") 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:30.044 [2024-10-13 14:06:33.428184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.044 [2024-10-13 14:06:33.428210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:30.044 14:06:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:30.044 "params": { 00:10:30.044 "name": "Nvme1", 00:10:30.044 "trtype": "tcp", 00:10:30.044 "traddr": "10.0.0.2", 00:10:30.044 "adrfam": "ipv4", 00:10:30.044 "trsvcid": "4420", 00:10:30.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.044 "hdgst": false, 00:10:30.044 "ddgst": false 00:10:30.044 }, 00:10:30.044 "method": "bdev_nvme_attach_controller" 00:10:30.044 }' 00:10:30.044 [2024-10-13 14:06:33.440156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.044 [2024-10-13 14:06:33.440165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.044 [2024-10-13 14:06:33.452156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.044 [2024-10-13 14:06:33.452164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.044 [2024-10-13 14:06:33.464159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.044 [2024-10-13 14:06:33.464167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.044 [2024-10-13 14:06:33.472849] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:10:30.044 [2024-10-13 14:06:33.472896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544022 ]
00:10:30.044 [2024-10-13 14:06:33.476161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.476168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.488163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.488170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.500169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.500176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.512170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.512181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.524173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.524180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.536175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.536182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.548179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.548186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.560182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.560189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.572185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.572192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.584188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.584196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.596192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.596199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.603405] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
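
The gen_nvmf_target_json trace a few lines up (nvmf/common.sh@558-@584) builds one bdev_nvme_attach_controller stanza per subsystem, joins the stanzas with IFS=',', and pipes the result through jq before bdevperf reads it over /dev/fd/63. A minimal standalone sketch of that pattern; the fallback values and the outer "subsystems"/"bdev" wrapper are assumptions here, since only the per-controller stanza and the jq/IFS merge steps appear in the trace:

gen_nvmf_target_json() {
	local subsystem
	local config=()

	for subsystem in "${@:-1}"; do
		# One attach stanza per subsystem; hdgst/ddgst default to false,
		# mirroring the heredoc traced at nvmf/common.sh@580 above.
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done

	# IFS=, joins the stanzas into a JSON array; jq . validates and
	# pretty-prints the merged document (the @582-@584 steps in the trace).
	local IFS=,
	printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

With the process substitution spelled out, the traced invocation is equivalent to bdevperf --json <(gen_nvmf_target_json 1) -t 5 -q 128 -w randrw -M 50 -o 8192, i.e. a 5 s run at queue depth 128 with a 50/50 random read/write mix and 8 KiB per I/O.
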
00:10:30.044 [2024-10-13 14:06:33.608193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.608201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.620196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.620203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.632198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.632205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.644200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.644207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.650086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:30.044 [2024-10-13 14:06:33.656206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.656215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.665885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.044 [2024-10-13 14:06:33.668206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.668215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.680216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.680225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.692216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.692228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.704217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.704228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.716218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.716226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.728223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.728232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.044 [2024-10-13 14:06:33.740235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.044 [2024-10-13 14:06:33.740252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.752233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.752244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.764238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-10-13 14:06:33.764249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.776241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.776252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.788242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.788249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.800250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.800264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 Running I/O for 5 seconds...
00:10:30.305 [2024-10-13 14:06:33.812252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.812264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.827583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.827599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.840612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.840629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.853987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.854003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.867111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.867127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.880256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.880272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.893377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.893393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.906757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.906772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.919743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.919759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.933276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.933291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.945382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.305 [2024-10-13 14:06:33.945397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.305 [2024-10-13 14:06:33.958642]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.305 [2024-10-13 14:06:33.958662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.305 [2024-10-13 14:06:33.971696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.305 [2024-10-13 14:06:33.971711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.305 [2024-10-13 14:06:33.984974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.305 [2024-10-13 14:06:33.984990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.305 [2024-10-13 14:06:33.997760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.305 [2024-10-13 14:06:33.997776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.305 [2024-10-13 14:06:34.010959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.305 [2024-10-13 14:06:34.010974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.024521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.024536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.037228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.037243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.050184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.050200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.063325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.063340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.076407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.076422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.089638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.089653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.102097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.102113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.114253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.114268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.127092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.127107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.140432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.140447] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.153652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.153668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.166997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.167013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.180231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.180247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.192785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.192800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.206459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.206478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.219179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.219193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.232539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.232555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.245424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.245439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.258781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.258797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.566 [2024-10-13 14:06:34.271570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.566 [2024-10-13 14:06:34.271586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-10-13 14:06:34.285079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-10-13 14:06:34.285095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-10-13 14:06:34.298335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-10-13 14:06:34.298350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-10-13 14:06:34.311431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-10-13 14:06:34.311446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-10-13 14:06:34.324408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-10-13 14:06:34.324422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-10-13 14:06:34.337488] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-10-13 14:06:34.337503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.826 [2024-10-13 14:06:34.350531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.826 [2024-10-13 14:06:34.350546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.363534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.363549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.376768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.376783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.389735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.389751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.403240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.403255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.415603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.415617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.428575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.428590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.441416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.441430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.454541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.454563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.467180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.467195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.480438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.480452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.493749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.493764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.506626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.506642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.519724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.519738] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.827 [2024-10-13 14:06:34.532956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.827 [2024-10-13 14:06:34.532971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.545884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.545899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.558886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.558902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.572409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.572424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.585058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.585076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.597905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.597920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.610931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.610946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.624121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.624136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.636801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.636817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.649960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.649974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.663169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.663184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.676317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.676332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.689075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.689090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.087 [2024-10-13 14:06:34.701961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.087 [2024-10-13 14:06:34.701976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.715015] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.715029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.728296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.728311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.741332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.741347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.754114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.754129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.767220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.767236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.780364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.780379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.088 [2024-10-13 14:06:34.793686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.088 [2024-10-13 14:06:34.793701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 19186.00 IOPS, 149.89 MiB/s [2024-10-13T12:06:35.056Z] [2024-10-13 14:06:34.807364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.807380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.820147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.820162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.832882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.832896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.846551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.846566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.859145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.859160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.871907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.871923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.884863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.884878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.898083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
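
Buried in the error stream above is the 5-second run's first progress sample, 19186.00 IOPS, 149.89 MiB/s. The two figures are mutually consistent with the -o 8192 I/O size used for this run: at 8 KiB per I/O, MiB/s = IOPS x 8192 / 2^20 = IOPS / 128, which a one-liner confirms:

# Throughput cross-check for the progress samples in this run:
# MiB/s = IOPS * io_size / 2^20; with -o 8192 that is IOPS / 128.
awk 'BEGIN { iops = 19186.00; printf "%.2f MiB/s\n", iops * 8192 / 1048576 }'
# -> 149.89 MiB/s; the later samples check out the same way (19254.00 -> 150.42).
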
00:10:31.349 [2024-10-13 14:06:34.898098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.911143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.911158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.924096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.924110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.937013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.937028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.950375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.950389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.963392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.963407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.976117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.976132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:34.988863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:34.988878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:35.001815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:35.001830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:35.014901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:35.014915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:35.027898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:35.027913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:35.041023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:35.041038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.349 [2024-10-13 14:06:35.053635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.349 [2024-10-13 14:06:35.053650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.067005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.067020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.080157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.080171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.093257] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.093272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.106427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.106442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.119287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.119302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.132822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.132836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.145418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.145432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.158260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.158274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.171546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.171560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.184340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.184359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.197916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.197931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.210807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.210822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.224256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.224271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.236397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.236411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.249041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.249055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.262347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.262362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.275736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.275751] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.288921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.288935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.301591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.301606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.610 [2024-10-13 14:06:35.314641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.610 [2024-10-13 14:06:35.314656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.327461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.327476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.340965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.340980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.354341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.354356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.367315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.367329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.380986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.381001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.394334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.394349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.407117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.407132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.419597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.419612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.432019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.432038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.444610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.444625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.457252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.457267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.470604] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.470619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.484100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.484115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.496580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.496595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.510059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.510078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.523213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.523229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.536643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.536659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.549843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.549858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.563257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.563272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.871 [2024-10-13 14:06:35.576196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.871 [2024-10-13 14:06:35.576212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.589571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.589586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.603031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.603047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.616075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.616090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.629439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.629454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.642747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.642762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.655943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.655959] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.669129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.669144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.682593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.682613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.695968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.695983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.708961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.708976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.722177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.722193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.735863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.735879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.748796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.748811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.762178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.762193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.775376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.775391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.788206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.788221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.131 [2024-10-13 14:06:35.801875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.131 [2024-10-13 14:06:35.801891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.132 19254.00 IOPS, 150.42 MiB/s [2024-10-13T12:06:35.839Z] [2024-10-13 14:06:35.814716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.132 [2024-10-13 14:06:35.814731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.132 [2024-10-13 14:06:35.827768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.132 [2024-10-13 14:06:35.827783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.840782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.840797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 
14:06:35.853821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.853836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.866735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.866750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.880344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.880359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.893482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.893498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.906902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.906917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.919577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.919592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.933268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.933283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.945926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.945941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.958495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.958511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.971283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.971298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.391 [2024-10-13 14:06:35.984626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.391 [2024-10-13 14:06:35.984641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:35.997745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:35.997761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.011303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.011318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.024762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.024777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.037591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.037606] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.050813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.050828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.063922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.063938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.077057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.077076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.392 [2024-10-13 14:06:36.090449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.392 [2024-10-13 14:06:36.090464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.103839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.103855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.117266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.117281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.130528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.130545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.143691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.143706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.156964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.156979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.170623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.170638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.184239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.184253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.197596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.197611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.211131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.211146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.223702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.652 [2024-10-13 14:06:36.223717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.652 [2024-10-13 14:06:36.237030] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:32.652 [2024-10-13 14:06:36.237045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:33.173 19246.00 IOPS, 150.36 MiB/s [2024-10-13T12:06:36.880Z]
00:10:34.216 19238.00 IOPS, 150.30 MiB/s [2024-10-13T12:06:37.923Z]
00:10:35.259 19245.40 IOPS, 150.35 MiB/s [2024-10-13T12:06:38.966Z]
00:10:35.259 [2024-10-13 14:06:38.811007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.259 [2024-10-13 14:06:38.811021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:35.259
00:10:35.259 Latency(us)
00:10:35.259 [2024-10-13T12:06:38.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:35.259 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:35.259 Nvme1n1 : 5.01 19247.14 150.37 0.00 0.00 6644.24 2997.07 15655.94
00:10:35.259 [2024-10-13T12:06:38.966Z] ===================================================================================================================
00:10:35.259 [2024-10-13T12:06:38.966Z] Total : 19247.14 150.37 0.00 0.00 6644.24 2997.07 15655.94
00:10:35.259 [2024-10-13 14:06:38.907027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:35.259 [2024-10-13 14:06:38.907035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:35.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1544022) - No such process
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1544022
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.259 delay0
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.259 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:35.520 [2024-10-13 14:06:39.158448] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:43.703 Initializing NVMe Controllers
00:10:43.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:43.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:43.703 Initialization complete. Launching workers.
00:10:43.703 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 27256
00:10:43.703 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27416, failed to submit 101
00:10:43.703 success 27320, unsuccessful 96, failed 0
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:43.703 rmmod nvme_tcp
00:10:43.703 rmmod nvme_fabrics
00:10:43.703 rmmod nvme_keyring
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1541745 ']'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1541745
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1541745 ']'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1541745
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541745
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541745'
00:10:43.703 killing process with pid 1541745
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1541745
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1541745
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:43.703 14:06:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:45.086 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:45.086
00:10:45.086 real 0m34.706s
00:10:45.086 user 0m45.267s
00:10:45.086 sys 0m11.812s
00:10:45.086 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:45.086 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:45.086 ************************************
00:10:45.086 END TEST nvmf_zcopy
00:10:45.086 ************************************
00:10:45.086 14:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:45.087 14:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:45.087 14:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:45.087 14:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:45.087 ************************************
00:10:45.087 START TEST nvmf_nmic
00:10:45.087 ************************************
00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:45.348 * Looking for test storage...
00:10:45.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:45.348 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:45.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.349 --rc genhtml_branch_coverage=1 00:10:45.349 --rc genhtml_function_coverage=1 00:10:45.349 --rc genhtml_legend=1 00:10:45.349 --rc geninfo_all_blocks=1 00:10:45.349 --rc geninfo_unexecuted_blocks=1 00:10:45.349 00:10:45.349 ' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:45.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.349 --rc genhtml_branch_coverage=1 00:10:45.349 --rc genhtml_function_coverage=1 00:10:45.349 --rc genhtml_legend=1 00:10:45.349 --rc geninfo_all_blocks=1 00:10:45.349 --rc geninfo_unexecuted_blocks=1 00:10:45.349 00:10:45.349 ' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:45.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.349 --rc genhtml_branch_coverage=1 00:10:45.349 --rc genhtml_function_coverage=1 00:10:45.349 --rc genhtml_legend=1 00:10:45.349 --rc geninfo_all_blocks=1 00:10:45.349 --rc geninfo_unexecuted_blocks=1 00:10:45.349 00:10:45.349 ' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:45.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.349 --rc genhtml_branch_coverage=1 00:10:45.349 --rc genhtml_function_coverage=1 00:10:45.349 --rc genhtml_legend=1 00:10:45.349 --rc geninfo_all_blocks=1 00:10:45.349 --rc geninfo_unexecuted_blocks=1 00:10:45.349 00:10:45.349 ' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:45.349 
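paths/export.sh is re-sourced by every test script, and each pass prepends the same Go, protoc, and golangci directories, which is why the PATH echoed above repeats those segments many times. Lookup is unaffected (the first match wins), but if the duplication were a concern, one illustrative way to collapse it, not something this harness actually runs, is:

    # Illustrative only: keep the first occurrence of each PATH entry.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH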
14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:45.349 14:06:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:53.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:53.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.491 14:06:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:53.491 Found net devices under 0000:31:00.0: cvl_0_0 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:53.491 Found net devices under 0000:31:00.1: cvl_0_1 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.491 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:10:53.492 00:10:53.492 --- 10.0.0.2 ping statistics --- 00:10:53.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.492 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:10:53.492 00:10:53.492 --- 10.0.0.1 ping statistics --- 00:10:53.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.492 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1550951 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1550951 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1550951 ']' 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.492 14:06:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.492 [2024-10-13 14:06:56.702040] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:10:53.492 [2024-10-13 14:06:56.702130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.492 [2024-10-13 14:06:56.844734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
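Summarizing the nvmf_tcp_init trace above: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the two ports are addressed as 10.0.0.2 (target) and 10.0.0.1 (initiator), TCP port 4420 is opened in iptables (tagged with an SPDK_NVMF comment so the rule can be stripped again at teardown), and reachability is verified with a sub-millisecond ping in each direction. The same sequence, extracted from the trace (interface names are specific to this host):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With the topology up, nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and the harness waits for it to open the /var/tmp/spdk.sock RPC socket before issuing any RPCs.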
00:10:53.492 [2024-10-13 14:06:56.894661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.492 [2024-10-13 14:06:56.924008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.492 [2024-10-13 14:06:56.924055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.492 [2024-10-13 14:06:56.924075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.492 [2024-10-13 14:06:56.924083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.492 [2024-10-13 14:06:56.924089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.492 [2024-10-13 14:06:56.926378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.492 [2024-10-13 14:06:56.926535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.492 [2024-10-13 14:06:56.926693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.492 [2024-10-13 14:06:56.926694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 [2024-10-13 14:06:57.573538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 Malloc0 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 [2024-10-13 14:06:57.648221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:54.064 test case1: single bdev can't be used in multiple subsystems 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.064 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.065 [2024-10-13 14:06:57.683947] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:54.065 [2024-10-13 14:06:57.683974] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:54.065 [2024-10-13 14:06:57.683983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.065 request: 00:10:54.065 { 00:10:54.065 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.065 "namespace": { 00:10:54.065 "bdev_name": "Malloc0", 00:10:54.065 "no_auto_visible": false 00:10:54.065 }, 00:10:54.065 "method": "nvmf_subsystem_add_ns", 00:10:54.065 "req_id": 1 00:10:54.065 } 00:10:54.065 Got JSON-RPC error response 00:10:54.065 response: 00:10:54.065 { 00:10:54.065 "code": -32602, 00:10:54.065 "message": "Invalid parameters" 00:10:54.065 } 
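The failure above is the point of test case1: nvmf_subsystem_add_ns on cnode1 claims Malloc0 with an exclusive_write claim, so attaching the same bdev to cnode2 is rejected (bdev_open error -1, surfaced to the caller as JSON-RPC error -32602). Written out as plain RPC calls via scripts/rpc.py, which is what the rpc_cmd wrapper in the trace ultimately drives (names and arguments are the ones logged above; rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed

The rejection is the expected result, so the test records nmic_status=1 and moves on to test case2, which connects the host to cnode1 over both listeners (ports 4420 and 4421) to exercise multipath.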
00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:54.065 Adding namespace failed - expected result. 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:54.065 test case2: host connect to nvmf target in multiple paths 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.065 [2024-10-13 14:06:57.696137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.065 14:06:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.978 14:06:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:57.362 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.362 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:57.362 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.362 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:57.362 14:07:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:59.274 14:07:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:59.274 [global] 00:10:59.274 thread=1 00:10:59.274 invalidate=1 00:10:59.274 rw=write 00:10:59.274 time_based=1 00:10:59.274 runtime=1 00:10:59.274 
ioengine=libaio 00:10:59.274 direct=1 00:10:59.274 bs=4096 00:10:59.274 iodepth=1 00:10:59.274 norandommap=0 00:10:59.274 numjobs=1 00:10:59.274 00:10:59.274 verify_dump=1 00:10:59.274 verify_backlog=512 00:10:59.274 verify_state_save=0 00:10:59.274 do_verify=1 00:10:59.274 verify=crc32c-intel 00:10:59.274 [job0] 00:10:59.274 filename=/dev/nvme0n1 00:10:59.274 Could not set queue depth (nvme0n1) 00:10:59.843 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.843 fio-3.35 00:10:59.843 Starting 1 thread 00:11:00.785 00:11:00.785 job0: (groupid=0, jobs=1): err= 0: pid=1552348: Sun Oct 13 14:07:04 2024 00:11:00.785 read: IOPS=18, BW=73.6KiB/s (75.4kB/s)(76.0KiB/1032msec) 00:11:00.785 slat (nsec): min=6882, max=27640, avg=26230.68, stdev=4687.64 00:11:00.785 clat (usec): min=40914, max=41608, avg=40998.29, stdev=149.71 00:11:00.785 lat (usec): min=40942, max=41615, avg=41024.52, stdev=145.08 00:11:00.785 clat percentiles (usec): 00:11:00.785 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:00.785 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.785 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:00.785 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:00.785 | 99.99th=[41681] 00:11:00.785 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:11:00.785 slat (usec): min=2, max=24758, avg=73.80, stdev=1093.12 00:11:00.785 clat (usec): min=190, max=1988, avg=412.72, stdev=104.35 00:11:00.785 lat (usec): min=203, max=25111, avg=486.52, stdev=1095.80 00:11:00.785 clat percentiles (usec): 00:11:00.785 | 1.00th=[ 247], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 338], 00:11:00.785 | 30.00th=[ 355], 40.00th=[ 383], 50.00th=[ 433], 60.00th=[ 449], 00:11:00.785 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 494], 95.00th=[ 506], 00:11:00.785 | 99.00th=[ 578], 99.50th=[ 676], 99.90th=[ 1991], 99.95th=[ 1991], 00:11:00.785 | 99.99th=[ 1991] 00:11:00.785 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.785 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.785 lat (usec) : 250=1.51%, 500=87.57%, 750=7.16% 00:11:00.785 lat (msec) : 2=0.19%, 50=3.58% 00:11:00.785 cpu : usr=1.07%, sys=0.78%, ctx=534, majf=0, minf=1 00:11:00.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.785 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.785 00:11:00.785 Run status group 0 (all jobs): 00:11:00.785 READ: bw=73.6KiB/s (75.4kB/s), 73.6KiB/s-73.6KiB/s (75.4kB/s-75.4kB/s), io=76.0KiB (77.8kB), run=1032-1032msec 00:11:00.785 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:11:00.785 00:11:00.785 Disk stats (read/write): 00:11:00.785 nvme0n1: ios=67/512, merge=0/0, ticks=985/213, in_queue=1198, util=98.50% 00:11:00.785 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.045 rmmod nvme_tcp 00:11:01.045 rmmod nvme_fabrics 00:11:01.045 rmmod nvme_keyring 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:01.045 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1550951 ']' 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1550951 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1550951 ']' 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1550951 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1550951 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1550951' 00:11:01.046 killing process with pid 1550951 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1550951 00:11:01.046 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1550951 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.349 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:03.413 00:11:03.413 real 0m18.207s 00:11:03.413 user 0m48.809s 00:11:03.413 sys 0m6.669s 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.413 ************************************ 00:11:03.413 END TEST nvmf_nmic 00:11:03.413 ************************************ 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.413 14:07:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.413 ************************************ 00:11:03.413 START TEST nvmf_fio_target 00:11:03.413 ************************************ 00:11:03.413 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.413 * Looking for test storage... 
00:11:03.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.413 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:03.413 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:03.413 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.675 --rc genhtml_branch_coverage=1 00:11:03.675 --rc genhtml_function_coverage=1 00:11:03.675 --rc genhtml_legend=1 00:11:03.675 --rc geninfo_all_blocks=1 00:11:03.675 --rc geninfo_unexecuted_blocks=1 00:11:03.675 00:11:03.675 ' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.675 --rc genhtml_branch_coverage=1 00:11:03.675 --rc genhtml_function_coverage=1 00:11:03.675 --rc genhtml_legend=1 00:11:03.675 --rc geninfo_all_blocks=1 00:11:03.675 --rc geninfo_unexecuted_blocks=1 00:11:03.675 00:11:03.675 ' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.675 --rc genhtml_branch_coverage=1 00:11:03.675 --rc genhtml_function_coverage=1 00:11:03.675 --rc genhtml_legend=1 00:11:03.675 --rc geninfo_all_blocks=1 00:11:03.675 --rc geninfo_unexecuted_blocks=1 00:11:03.675 00:11:03.675 ' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.675 --rc genhtml_branch_coverage=1 00:11:03.675 --rc genhtml_function_coverage=1 00:11:03.675 --rc genhtml_legend=1 00:11:03.675 --rc geninfo_all_blocks=1 00:11:03.675 --rc geninfo_unexecuted_blocks=1 00:11:03.675 00:11:03.675 ' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.675 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.676 14:07:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.676 14:07:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.819 14:07:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:11.819 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:11.819 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.819 14:07:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:11.819 Found net devices under 0000:31:00.0: cvl_0_0 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:11.819 Found net devices under 0000:31:00.1: cvl_0_1 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.819 14:07:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:11.819 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:11.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:11.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms
00:11:11.819
00:11:11.819 --- 10.0.0.2 ping statistics ---
00:11:11.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:11.819 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms
00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:11.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:11.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:11:11.820 00:11:11.820 --- 10.0.0.1 ping statistics --- 00:11:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.820 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1557083 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1557083 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1557083 ']' 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.820 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.820 [2024-10-13 14:07:15.023500] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
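At this point nvmftestinit has finished wiring the test topology and nvmfappstart has launched the target, whose startup notices continue below. For readers following the trace, a condensed sketch of the namespace setup the harness just performed; every command and name here (cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, port 4420, the nvmf_tgt flags) is copied from the xtrace above, with comments added for orientation:

    # Start from clean interfaces, then isolate the target-side NIC port in its own
    # network namespace; the initiator-side port stays in the default namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address both ends of the link: 10.0.0.1 = initiator side, 10.0.0.2 = target side.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Punch a firewall hole for NVMe/TCP (port 4420), then check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Finally the target runs inside the namespace: -i 0 (shm id), -e 0xFFFF (tracepoint
    # group mask), -m 0xF (reactors on cores 0-3), matching the notices printed below.
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Splitting the two E810 ports across namespaces lets a single host act as both ends of a real TCP connection, so the nvme connect to 10.0.0.2 later in the run traverses the physical link rather than loopback.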
00:11:11.820 [2024-10-13 14:07:15.023565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.820 [2024-10-13 14:07:15.165316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:11.820 [2024-10-13 14:07:15.213654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.820 [2024-10-13 14:07:15.242100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.820 [2024-10-13 14:07:15.242142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.820 [2024-10-13 14:07:15.242151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.820 [2024-10-13 14:07:15.242157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.820 [2024-10-13 14:07:15.242163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.820 [2024-10-13 14:07:15.244088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.820 [2024-10-13 14:07:15.244199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.820 [2024-10-13 14:07:15.244484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.820 [2024-10-13 14:07:15.244486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.391 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.391 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:12.391 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:12.391 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:12.391 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.392 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.392 14:07:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:12.392 [2024-10-13 14:07:16.063660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.653 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.653 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:12.653 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.913 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:12.913 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.174 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:13.174 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.435 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:13.435 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:13.695 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.695 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:13.695 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.955 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:13.955 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.215 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:14.215 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:14.474 14:07:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.474 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.474 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.734 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.734 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.993 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.993 [2024-10-13 14:07:18.602278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.993 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:15.254 14:07:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:15.514 14:07:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.900 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:16.900 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.900 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.900 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:16.900 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:16.900 14:07:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:19.444 14:07:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:19.444 [global] 00:11:19.444 thread=1 00:11:19.444 invalidate=1 00:11:19.444 rw=write 00:11:19.444 time_based=1 00:11:19.444 runtime=1 00:11:19.444 ioengine=libaio 00:11:19.444 direct=1 00:11:19.444 bs=4096 00:11:19.444 iodepth=1 00:11:19.444 norandommap=0 00:11:19.444 numjobs=1 00:11:19.444 00:11:19.444 verify_dump=1 00:11:19.444 verify_backlog=512 00:11:19.444 verify_state_save=0 00:11:19.444 do_verify=1 00:11:19.444 verify=crc32c-intel 00:11:19.444 [job0] 00:11:19.444 filename=/dev/nvme0n1 00:11:19.444 [job1] 00:11:19.444 filename=/dev/nvme0n2 00:11:19.444 [job2] 00:11:19.444 filename=/dev/nvme0n3 00:11:19.444 [job3] 00:11:19.444 filename=/dev/nvme0n4 00:11:19.444 Could not set queue depth (nvme0n1) 00:11:19.444 Could not set queue depth (nvme0n2) 00:11:19.444 Could not set queue depth (nvme0n3) 00:11:19.444 Could not set queue depth (nvme0n4) 00:11:19.444 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.444 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.444 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.444 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.444 fio-3.35 00:11:19.444 Starting 4 threads 00:11:20.827 00:11:20.827 job0: (groupid=0, jobs=1): err= 0: pid=1558976: Sun Oct 13 14:07:24 2024 00:11:20.827 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:20.827 slat (nsec): min=25500, max=44319, avg=26435.00, stdev=2272.40 00:11:20.827 clat (usec): min=748, max=1220, avg=1012.01, stdev=83.08 00:11:20.827 lat (usec): min=774, max=1246, avg=1038.44, 
stdev=83.09 00:11:20.827 clat percentiles (usec): 00:11:20.827 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 898], 20.00th=[ 963], 00:11:20.827 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:11:20.827 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:11:20.827 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:11:20.827 | 99.99th=[ 1221] 00:11:20.827 write: IOPS=724, BW=2897KiB/s (2967kB/s)(2900KiB/1001msec); 0 zone resets 00:11:20.827 slat (nsec): min=10042, max=53627, avg=29687.52, stdev=10318.61 00:11:20.827 clat (usec): min=228, max=988, avg=603.24, stdev=120.75 00:11:20.827 lat (usec): min=238, max=1022, avg=632.92, stdev=125.52 00:11:20.827 clat percentiles (usec): 00:11:20.827 | 1.00th=[ 277], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 490], 00:11:20.827 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:11:20.827 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 00:11:20.827 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 988], 99.95th=[ 988], 00:11:20.827 | 99.99th=[ 988] 00:11:20.827 bw ( KiB/s): min= 4096, max= 4096, per=36.75%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.827 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.827 lat (usec) : 250=0.08%, 500=12.85%, 750=41.07%, 1000=18.84% 00:11:20.827 lat (msec) : 2=27.16% 00:11:20.827 cpu : usr=1.90%, sys=3.50%, ctx=1240, majf=0, minf=1 00:11:20.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.827 issued rwts: total=512,725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.827 job1: (groupid=0, jobs=1): err= 0: pid=1558993: Sun Oct 13 14:07:24 2024 00:11:20.827 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:20.827 slat (nsec): min=7110, max=62361, avg=27168.87, stdev=3816.51 00:11:20.827 clat (usec): min=789, max=1472, avg=1090.62, stdev=105.47 00:11:20.827 lat (usec): min=816, max=1499, avg=1117.79, stdev=105.37 00:11:20.827 clat percentiles (usec): 00:11:20.827 | 1.00th=[ 840], 5.00th=[ 914], 10.00th=[ 971], 20.00th=[ 1004], 00:11:20.827 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:11:20.827 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:11:20.827 | 99.00th=[ 1385], 99.50th=[ 1418], 99.90th=[ 1467], 99.95th=[ 1467], 00:11:20.827 | 99.99th=[ 1467] 00:11:20.827 write: IOPS=653, BW=2613KiB/s (2676kB/s)(2616KiB/1001msec); 0 zone resets 00:11:20.827 slat (nsec): min=9276, max=69277, avg=30180.59, stdev=10089.47 00:11:20.827 clat (usec): min=259, max=962, avg=609.65, stdev=112.22 00:11:20.827 lat (usec): min=270, max=996, avg=639.83, stdev=116.76 00:11:20.827 clat percentiles (usec): 00:11:20.827 | 1.00th=[ 334], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 519], 00:11:20.827 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 652], 00:11:20.827 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:11:20.827 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 963], 99.95th=[ 963], 00:11:20.827 | 99.99th=[ 963] 00:11:20.827 bw ( KiB/s): min= 4096, max= 4096, per=36.75%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.827 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.827 lat (usec) : 500=10.03%, 750=41.08%, 1000=13.55% 00:11:20.827 lat (msec) : 
2=35.33% 00:11:20.827 cpu : usr=2.80%, sys=4.10%, ctx=1166, majf=0, minf=1 00:11:20.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.827 issued rwts: total=512,654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.827 job2: (groupid=0, jobs=1): err= 0: pid=1559013: Sun Oct 13 14:07:24 2024 00:11:20.827 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:20.827 slat (nsec): min=26133, max=45418, avg=27121.17, stdev=2696.54 00:11:20.827 clat (usec): min=656, max=1467, avg=1022.66, stdev=103.16 00:11:20.827 lat (usec): min=684, max=1494, avg=1049.78, stdev=103.14 00:11:20.828 clat percentiles (usec): 00:11:20.828 | 1.00th=[ 750], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 947], 00:11:20.828 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1045], 00:11:20.828 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:11:20.828 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1467], 99.95th=[ 1467], 00:11:20.828 | 99.99th=[ 1467] 00:11:20.828 write: IOPS=692, BW=2769KiB/s (2836kB/s)(2772KiB/1001msec); 0 zone resets 00:11:20.828 slat (nsec): min=10068, max=79934, avg=31483.97, stdev=9874.29 00:11:20.828 clat (usec): min=255, max=916, avg=622.06, stdev=117.37 00:11:20.828 lat (usec): min=267, max=951, avg=653.54, stdev=121.17 00:11:20.828 clat percentiles (usec): 00:11:20.828 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 465], 20.00th=[ 515], 00:11:20.828 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:11:20.828 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:11:20.828 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 914], 99.95th=[ 914], 00:11:20.828 | 99.99th=[ 914] 00:11:20.828 bw ( KiB/s): min= 4096, max= 4096, per=36.75%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.828 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.828 lat (usec) : 500=9.79%, 750=41.24%, 1000=22.16% 00:11:20.828 lat (msec) : 2=26.80% 00:11:20.828 cpu : usr=1.50%, sys=4.00%, ctx=1206, majf=0, minf=1 00:11:20.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.828 issued rwts: total=512,693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.828 job3: (groupid=0, jobs=1): err= 0: pid=1559017: Sun Oct 13 14:07:24 2024 00:11:20.828 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:20.828 slat (nsec): min=25001, max=43534, avg=26198.04, stdev=2573.06 00:11:20.828 clat (usec): min=601, max=1407, avg=998.97, stdev=104.22 00:11:20.828 lat (usec): min=627, max=1433, avg=1025.17, stdev=104.24 00:11:20.828 clat percentiles (usec): 00:11:20.828 | 1.00th=[ 734], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 922], 00:11:20.828 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1029], 00:11:20.828 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:11:20.828 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1401], 99.95th=[ 1401], 00:11:20.828 | 99.99th=[ 1401] 00:11:20.828 write: IOPS=716, BW=2865KiB/s (2934kB/s)(2868KiB/1001msec); 0 zone resets 00:11:20.828 slat (nsec): 
min=9457, max=52867, avg=30817.30, stdev=8399.67 00:11:20.828 clat (usec): min=187, max=1032, avg=618.52, stdev=120.84 00:11:20.828 lat (usec): min=197, max=1064, avg=649.34, stdev=124.35 00:11:20.828 clat percentiles (usec): 00:11:20.828 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 515], 00:11:20.828 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 660], 00:11:20.828 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 799], 00:11:20.828 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 1029], 99.95th=[ 1029], 00:11:20.828 | 99.99th=[ 1029] 00:11:20.828 bw ( KiB/s): min= 4096, max= 4096, per=36.75%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.828 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.828 lat (usec) : 250=0.08%, 500=9.76%, 750=42.31%, 1000=24.49% 00:11:20.828 lat (msec) : 2=23.35% 00:11:20.828 cpu : usr=1.70%, sys=4.10%, ctx=1229, majf=0, minf=1 00:11:20.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.828 issued rwts: total=512,717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.828 00:11:20.828 Run status group 0 (all jobs): 00:11:20.828 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:11:20.828 WRITE: bw=10.9MiB/s (11.4MB/s), 2613KiB/s-2897KiB/s (2676kB/s-2967kB/s), io=10.9MiB (11.4MB), run=1001-1001msec 00:11:20.828 00:11:20.828 Disk stats (read/write): 00:11:20.828 nvme0n1: ios=508/512, merge=0/0, ticks=1321/303, in_queue=1624, util=83.87% 00:11:20.828 nvme0n2: ios=498/512, merge=0/0, ticks=519/253, in_queue=772, util=90.59% 00:11:20.828 nvme0n3: ios=519/512, merge=0/0, ticks=1110/311, in_queue=1421, util=91.95% 00:11:20.828 nvme0n4: ios=537/512, merge=0/0, ticks=581/283, in_queue=864, util=97.21% 00:11:20.828 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:20.828 [global] 00:11:20.828 thread=1 00:11:20.828 invalidate=1 00:11:20.828 rw=randwrite 00:11:20.828 time_based=1 00:11:20.828 runtime=1 00:11:20.828 ioengine=libaio 00:11:20.828 direct=1 00:11:20.828 bs=4096 00:11:20.828 iodepth=1 00:11:20.828 norandommap=0 00:11:20.828 numjobs=1 00:11:20.828 00:11:20.828 verify_dump=1 00:11:20.828 verify_backlog=512 00:11:20.828 verify_state_save=0 00:11:20.828 do_verify=1 00:11:20.828 verify=crc32c-intel 00:11:20.828 [job0] 00:11:20.828 filename=/dev/nvme0n1 00:11:20.828 [job1] 00:11:20.828 filename=/dev/nvme0n2 00:11:20.828 [job2] 00:11:20.828 filename=/dev/nvme0n3 00:11:20.828 [job3] 00:11:20.828 filename=/dev/nvme0n4 00:11:20.828 Could not set queue depth (nvme0n1) 00:11:20.828 Could not set queue depth (nvme0n2) 00:11:20.828 Could not set queue depth (nvme0n3) 00:11:20.828 Could not set queue depth (nvme0n4) 00:11:21.089 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.089 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.089 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.089 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.089 fio-3.35 00:11:21.089 Starting 4 threads 00:11:22.475 00:11:22.475 job0: (groupid=0, jobs=1): err= 0: pid=1559457: Sun Oct 13 14:07:25 2024 00:11:22.475 read: IOPS=782, BW=3129KiB/s (3204kB/s)(3132KiB/1001msec) 00:11:22.475 slat (nsec): min=6769, max=59954, avg=24799.50, stdev=5190.18 00:11:22.475 clat (usec): min=276, max=1067, avg=697.95, stdev=154.69 00:11:22.475 lat (usec): min=302, max=1092, avg=722.75, stdev=154.88 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 351], 5.00th=[ 420], 10.00th=[ 498], 20.00th=[ 570], 00:11:22.475 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 709], 60.00th=[ 750], 00:11:22.475 | 70.00th=[ 807], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 922], 00:11:22.475 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1074], 00:11:22.475 | 99.99th=[ 1074] 00:11:22.475 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:22.475 slat (nsec): min=8926, max=64496, avg=26460.78, stdev=9691.76 00:11:22.475 clat (usec): min=96, max=741, avg=384.58, stdev=132.53 00:11:22.475 lat (usec): min=106, max=772, avg=411.04, stdev=137.33 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 99], 5.00th=[ 112], 10.00th=[ 215], 20.00th=[ 297], 00:11:22.475 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 383], 60.00th=[ 429], 00:11:22.475 | 70.00th=[ 453], 80.00th=[ 494], 90.00th=[ 562], 95.00th=[ 594], 00:11:22.475 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 742], 99.95th=[ 742], 00:11:22.475 | 99.99th=[ 742] 00:11:22.475 bw ( KiB/s): min= 4096, max= 4096, per=37.79%, avg=4096.00, stdev= 0.00, samples=1 00:11:22.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:22.475 lat (usec) : 100=0.89%, 250=7.58%, 500=41.95%, 750=31.82%, 1000=17.65% 00:11:22.475 lat (msec) : 2=0.11% 00:11:22.475 cpu : usr=2.30%, sys=5.10%, ctx=1807, majf=0, minf=1 00:11:22.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 issued rwts: total=783,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.475 job1: (groupid=0, jobs=1): err= 0: pid=1559474: Sun Oct 13 14:07:25 2024 00:11:22.475 read: IOPS=381, BW=1525KiB/s (1562kB/s)(1528KiB/1002msec) 00:11:22.475 slat (nsec): min=6571, max=44644, avg=24675.93, stdev=6364.38 00:11:22.475 clat (usec): min=268, max=41819, avg=1965.09, stdev=6796.48 00:11:22.475 lat (usec): min=275, max=41845, avg=1989.77, stdev=6796.67 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 383], 5.00th=[ 537], 10.00th=[ 594], 20.00th=[ 644], 00:11:22.475 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[ 791], 00:11:22.475 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 922], 95.00th=[ 963], 00:11:22.475 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:22.475 | 99.99th=[41681] 00:11:22.475 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:22.475 slat (nsec): min=8705, max=51556, avg=29537.36, stdev=8608.71 00:11:22.475 clat (usec): min=142, max=817, avg=428.83, stdev=121.14 00:11:22.475 lat (usec): min=175, max=849, avg=458.37, stdev=124.15 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 206], 5.00th=[ 249], 10.00th=[ 273], 20.00th=[ 314], 00:11:22.475 | 30.00th=[ 347], 40.00th=[ 392], 50.00th=[ 433], 
60.00th=[ 461], 00:11:22.475 | 70.00th=[ 494], 80.00th=[ 537], 90.00th=[ 594], 95.00th=[ 627], 00:11:22.475 | 99.00th=[ 709], 99.50th=[ 750], 99.90th=[ 816], 99.95th=[ 816], 00:11:22.475 | 99.99th=[ 816] 00:11:22.475 bw ( KiB/s): min= 4096, max= 4096, per=37.79%, avg=4096.00, stdev= 0.00, samples=1 00:11:22.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:22.475 lat (usec) : 250=3.02%, 500=39.60%, 750=33.11%, 1000=22.37% 00:11:22.475 lat (msec) : 2=0.45%, 10=0.11%, 20=0.11%, 50=1.23% 00:11:22.475 cpu : usr=2.10%, sys=2.90%, ctx=894, majf=0, minf=1 00:11:22.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 issued rwts: total=382,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.475 job2: (groupid=0, jobs=1): err= 0: pid=1559493: Sun Oct 13 14:07:25 2024 00:11:22.475 read: IOPS=17, BW=69.9KiB/s (71.6kB/s)(72.0KiB/1030msec) 00:11:22.475 slat (nsec): min=27497, max=28417, avg=27766.44, stdev=214.89 00:11:22.475 clat (usec): min=1318, max=42014, avg=39682.99, stdev=9574.80 00:11:22.475 lat (usec): min=1346, max=42041, avg=39710.75, stdev=9574.83 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 1319], 5.00th=[ 1319], 10.00th=[41681], 20.00th=[41681], 00:11:22.475 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:22.475 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:22.475 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:22.475 | 99.99th=[42206] 00:11:22.475 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:22.475 slat (nsec): min=9278, max=57840, avg=31239.70, stdev=10160.73 00:11:22.475 clat (usec): min=150, max=860, avg=575.39, stdev=122.78 00:11:22.475 lat (usec): min=159, max=894, avg=606.63, stdev=126.26 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 289], 5.00th=[ 355], 10.00th=[ 416], 20.00th=[ 465], 00:11:22.475 | 30.00th=[ 502], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 619], 00:11:22.475 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:11:22.475 | 99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 857], 99.95th=[ 857], 00:11:22.475 | 99.99th=[ 857] 00:11:22.475 bw ( KiB/s): min= 4096, max= 4096, per=37.79%, avg=4096.00, stdev= 0.00, samples=1 00:11:22.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:22.475 lat (usec) : 250=0.38%, 500=28.30%, 750=62.45%, 1000=5.47% 00:11:22.475 lat (msec) : 2=0.19%, 50=3.21% 00:11:22.475 cpu : usr=0.87%, sys=2.04%, ctx=531, majf=0, minf=1 00:11:22.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.475 job3: (groupid=0, jobs=1): err= 0: pid=1559500: Sun Oct 13 14:07:25 2024 00:11:22.475 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:22.475 slat (nsec): min=7435, max=44397, avg=26440.30, stdev=2775.59 00:11:22.475 clat (usec): min=526, max=1304, avg=994.72, stdev=110.17 00:11:22.475 lat (usec): min=553, max=1330, 
avg=1021.16, stdev=110.21 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 668], 5.00th=[ 791], 10.00th=[ 865], 20.00th=[ 922], 00:11:22.475 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1029], 00:11:22.475 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:11:22.475 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:11:22.475 | 99.99th=[ 1303] 00:11:22.475 write: IOPS=742, BW=2969KiB/s (3040kB/s)(2972KiB/1001msec); 0 zone resets 00:11:22.475 slat (nsec): min=9917, max=68560, avg=30952.73, stdev=8862.75 00:11:22.475 clat (usec): min=126, max=906, avg=597.84, stdev=128.10 00:11:22.475 lat (usec): min=136, max=939, avg=628.79, stdev=131.31 00:11:22.475 clat percentiles (usec): 00:11:22.475 | 1.00th=[ 255], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 490], 00:11:22.475 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:11:22.475 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 799], 00:11:22.475 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:11:22.475 | 99.99th=[ 906] 00:11:22.475 bw ( KiB/s): min= 4096, max= 4096, per=37.79%, avg=4096.00, stdev= 0.00, samples=1 00:11:22.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:22.475 lat (usec) : 250=0.48%, 500=12.51%, 750=40.56%, 1000=24.94% 00:11:22.475 lat (msec) : 2=21.51% 00:11:22.475 cpu : usr=2.20%, sys=3.50%, ctx=1258, majf=0, minf=1 00:11:22.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.475 issued rwts: total=512,743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.475 00:11:22.475 Run status group 0 (all jobs): 00:11:22.475 READ: bw=6583KiB/s (6741kB/s), 69.9KiB/s-3129KiB/s (71.6kB/s-3204kB/s), io=6780KiB (6943kB), run=1001-1030msec 00:11:22.475 WRITE: bw=10.6MiB/s (11.1MB/s), 1988KiB/s-4092KiB/s (2036kB/s-4190kB/s), io=10.9MiB (11.4MB), run=1001-1030msec 00:11:22.475 00:11:22.475 Disk stats (read/write): 00:11:22.475 nvme0n1: ios=600/1024, merge=0/0, ticks=408/375, in_queue=783, util=86.47% 00:11:22.475 nvme0n2: ios=427/512, merge=0/0, ticks=643/182, in_queue=825, util=91.23% 00:11:22.475 nvme0n3: ios=68/512, merge=0/0, ticks=686/218, in_queue=904, util=93.24% 00:11:22.475 nvme0n4: ios=550/512, merge=0/0, ticks=948/289, in_queue=1237, util=94.55% 00:11:22.475 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:22.475 [global] 00:11:22.475 thread=1 00:11:22.475 invalidate=1 00:11:22.475 rw=write 00:11:22.475 time_based=1 00:11:22.475 runtime=1 00:11:22.475 ioengine=libaio 00:11:22.475 direct=1 00:11:22.475 bs=4096 00:11:22.476 iodepth=128 00:11:22.476 norandommap=0 00:11:22.476 numjobs=1 00:11:22.476 00:11:22.476 verify_dump=1 00:11:22.476 verify_backlog=512 00:11:22.476 verify_state_save=0 00:11:22.476 do_verify=1 00:11:22.476 verify=crc32c-intel 00:11:22.476 [job0] 00:11:22.476 filename=/dev/nvme0n1 00:11:22.476 [job1] 00:11:22.476 filename=/dev/nvme0n2 00:11:22.476 [job2] 00:11:22.476 filename=/dev/nvme0n3 00:11:22.476 [job3] 00:11:22.476 filename=/dev/nvme0n4 00:11:22.476 Could not set queue depth (nvme0n1) 00:11:22.476 Could not set queue depth (nvme0n2) 00:11:22.476 Could not set 
queue depth (nvme0n3) 00:11:22.476 Could not set queue depth (nvme0n4) 00:11:22.737 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.737 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.737 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.737 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.737 fio-3.35 00:11:22.737 Starting 4 threads 00:11:24.124 00:11:24.124 job0: (groupid=0, jobs=1): err= 0: pid=1559956: Sun Oct 13 14:07:27 2024 00:11:24.124 read: IOPS=7654, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:11:24.124 slat (nsec): min=937, max=5104.4k, avg=65517.22, stdev=415965.34 00:11:24.124 clat (usec): min=1233, max=14938, avg=8523.19, stdev=1306.72 00:11:24.124 lat (usec): min=2670, max=17056, avg=8588.71, stdev=1343.32 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 5145], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7832], 00:11:24.124 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8586], 00:11:24.124 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[10028], 95.00th=[11076], 00:11:24.124 | 99.00th=[12125], 99.50th=[13304], 99.90th=[14877], 99.95th=[14877], 00:11:24.124 | 99.99th=[14877] 00:11:24.124 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:11:24.124 slat (nsec): min=1613, max=24241k, avg=58745.56, stdev=407764.53 00:11:24.124 clat (usec): min=1276, max=25522, avg=7632.90, stdev=1477.42 00:11:24.124 lat (usec): min=1286, max=25571, avg=7691.65, stdev=1508.89 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 2671], 5.00th=[ 4621], 10.00th=[ 5735], 20.00th=[ 6915], 00:11:24.124 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:11:24.124 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[ 9634], 00:11:24.124 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12387], 99.95th=[12911], 00:11:24.124 | 99.99th=[25560] 00:11:24.124 bw ( KiB/s): min=29312, max=32128, per=28.96%, avg=30720.00, stdev=1991.21, samples=2 00:11:24.124 iops : min= 7328, max= 8032, avg=7680.00, stdev=497.80, samples=2 00:11:24.124 lat (msec) : 2=0.24%, 4=1.34%, 10=91.45%, 20=6.96%, 50=0.01% 00:11:24.124 cpu : usr=5.19%, sys=7.69%, ctx=824, majf=0, minf=1 00:11:24.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:24.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.124 issued rwts: total=7670,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.124 job1: (groupid=0, jobs=1): err= 0: pid=1559990: Sun Oct 13 14:07:27 2024 00:11:24.124 read: IOPS=6181, BW=24.1MiB/s (25.3MB/s)(24.2MiB/1004msec) 00:11:24.124 slat (nsec): min=908, max=9455.3k, avg=76064.54, stdev=456343.74 00:11:24.124 clat (usec): min=1480, max=28915, avg=9260.87, stdev=2206.88 00:11:24.124 lat (usec): min=2854, max=28923, avg=9336.93, stdev=2218.54 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 5342], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7439], 00:11:24.124 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[ 9896], 00:11:24.124 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11863], 00:11:24.124 | 99.00th=[14746], 99.50th=[19268], 
99.90th=[28967], 99.95th=[28967], 00:11:24.124 | 99.99th=[28967] 00:11:24.124 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:11:24.124 slat (nsec): min=1618, max=23181k, avg=75736.66, stdev=559966.52 00:11:24.124 clat (usec): min=1202, max=61927, avg=10043.40, stdev=8373.16 00:11:24.124 lat (usec): min=1212, max=61934, avg=10119.13, stdev=8421.00 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 2147], 5.00th=[ 4490], 10.00th=[ 6456], 20.00th=[ 7111], 00:11:24.124 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8586], 00:11:24.124 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[21890], 00:11:24.124 | 99.00th=[53216], 99.50th=[57410], 99.90th=[62129], 99.95th=[62129], 00:11:24.124 | 99.99th=[62129] 00:11:24.124 bw ( KiB/s): min=23856, max=28864, per=24.85%, avg=26360.00, stdev=3541.19, samples=2 00:11:24.124 iops : min= 5964, max= 7216, avg=6590.00, stdev=885.30, samples=2 00:11:24.124 lat (msec) : 2=0.45%, 4=2.16%, 10=70.10%, 20=24.31%, 50=2.25% 00:11:24.124 lat (msec) : 100=0.72% 00:11:24.124 cpu : usr=4.09%, sys=4.29%, ctx=889, majf=0, minf=1 00:11:24.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:24.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.124 issued rwts: total=6206,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.124 job2: (groupid=0, jobs=1): err= 0: pid=1560029: Sun Oct 13 14:07:27 2024 00:11:24.124 read: IOPS=5155, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1004msec) 00:11:24.124 slat (nsec): min=923, max=47872k, avg=114606.76, stdev=1101970.08 00:11:24.124 clat (usec): min=2860, max=86241, avg=14624.95, stdev=16368.09 00:11:24.124 lat (usec): min=2865, max=86248, avg=14739.56, stdev=16465.96 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 3261], 5.00th=[ 7308], 10.00th=[ 7898], 20.00th=[ 8455], 00:11:24.124 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:11:24.124 | 70.00th=[10421], 80.00th=[11076], 90.00th=[14484], 95.00th=[65274], 00:11:24.124 | 99.00th=[74974], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:11:24.124 | 99.99th=[86508] 00:11:24.124 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:24.124 slat (nsec): min=1561, max=9549.0k, avg=67900.62, stdev=316726.88 00:11:24.124 clat (usec): min=4709, max=16080, avg=9108.74, stdev=1763.68 00:11:24.124 lat (usec): min=4719, max=20143, avg=9176.64, stdev=1778.04 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 6783], 20.00th=[ 7963], 00:11:24.124 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:11:24.124 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[11469], 95.00th=[12387], 00:11:24.124 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15664], 99.95th=[15664], 00:11:24.124 | 99.99th=[16057] 00:11:24.124 bw ( KiB/s): min=16384, max=28104, per=20.97%, avg=22244.00, stdev=8287.29, samples=2 00:11:24.124 iops : min= 4096, max= 7026, avg=5561.00, stdev=2071.82, samples=2 00:11:24.124 lat (msec) : 4=0.49%, 10=70.74%, 20=24.30%, 50=0.65%, 100=3.82% 00:11:24.124 cpu : usr=3.09%, sys=5.38%, ctx=740, majf=0, minf=2 00:11:24.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:24.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.124 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.124 issued rwts: total=5176,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.124 job3: (groupid=0, jobs=1): err= 0: pid=1560042: Sun Oct 13 14:07:27 2024 00:11:24.124 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1004msec) 00:11:24.124 slat (nsec): min=1005, max=8624.1k, avg=78330.63, stdev=593743.10 00:11:24.124 clat (usec): min=2821, max=24301, avg=10031.74, stdev=2475.77 00:11:24.124 lat (usec): min=3777, max=29696, avg=10110.07, stdev=2517.11 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 4817], 5.00th=[ 6915], 10.00th=[ 7963], 20.00th=[ 8717], 00:11:24.124 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9765], 00:11:24.124 | 70.00th=[10028], 80.00th=[11207], 90.00th=[13435], 95.00th=[15139], 00:11:24.124 | 99.00th=[18220], 99.50th=[20317], 99.90th=[22152], 99.95th=[22152], 00:11:24.124 | 99.99th=[24249] 00:11:24.124 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:11:24.124 slat (nsec): min=1660, max=35538k, avg=63227.64, stdev=515903.07 00:11:24.124 clat (usec): min=1056, max=46422, avg=8328.41, stdev=2016.79 00:11:24.124 lat (usec): min=1065, max=46432, avg=8391.64, stdev=2082.84 00:11:24.124 clat percentiles (usec): 00:11:24.124 | 1.00th=[ 3228], 5.00th=[ 4752], 10.00th=[ 5538], 20.00th=[ 7046], 00:11:24.124 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9110], 00:11:24.124 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10290], 00:11:24.124 | 99.00th=[11863], 99.50th=[12125], 99.90th=[17171], 99.95th=[44303], 00:11:24.124 | 99.99th=[46400] 00:11:24.124 bw ( KiB/s): min=26576, max=26672, per=25.10%, avg=26624.00, stdev=67.88, samples=2 00:11:24.124 iops : min= 6644, max= 6668, avg=6656.00, stdev=16.97, samples=2 00:11:24.124 lat (msec) : 2=0.02%, 4=1.20%, 10=79.95%, 20=18.32%, 50=0.51% 00:11:24.124 cpu : usr=4.29%, sys=6.68%, ctx=782, majf=0, minf=2 00:11:24.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:24.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.124 issued rwts: total=6649,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.124 00:11:24.124 Run status group 0 (all jobs): 00:11:24.124 READ: bw=100.0MiB/s (105MB/s), 20.1MiB/s-29.9MiB/s (21.1MB/s-31.4MB/s), io=100MiB (105MB), run=1002-1004msec 00:11:24.124 WRITE: bw=104MiB/s (109MB/s), 21.9MiB/s-29.9MiB/s (23.0MB/s-31.4MB/s), io=104MiB (109MB), run=1002-1004msec 00:11:24.124 00:11:24.124 Disk stats (read/write): 00:11:24.124 nvme0n1: ios=5653/6135, merge=0/0, ticks=31918/27897, in_queue=59815, util=85.47% 00:11:24.125 nvme0n2: ios=5291/5632, merge=0/0, ticks=24223/22232, in_queue=46455, util=89.28% 00:11:24.125 nvme0n3: ios=3640/4038, merge=0/0, ticks=22735/16074, in_queue=38809, util=90.25% 00:11:24.125 nvme0n4: ios=4909/5120, merge=0/0, ticks=41279/35317, in_queue=76596, util=99.65% 00:11:24.125 14:07:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:24.125 [global] 00:11:24.125 thread=1 00:11:24.125 invalidate=1 00:11:24.125 rw=randwrite 00:11:24.125 time_based=1 00:11:24.125 runtime=1 00:11:24.125 ioengine=libaio 00:11:24.125 
direct=1 00:11:24.125 bs=4096 00:11:24.125 iodepth=128 00:11:24.125 norandommap=0 00:11:24.125 numjobs=1 00:11:24.125 00:11:24.125 verify_dump=1 00:11:24.125 verify_backlog=512 00:11:24.125 verify_state_save=0 00:11:24.125 do_verify=1 00:11:24.125 verify=crc32c-intel 00:11:24.125 [job0] 00:11:24.125 filename=/dev/nvme0n1 00:11:24.125 [job1] 00:11:24.125 filename=/dev/nvme0n2 00:11:24.125 [job2] 00:11:24.125 filename=/dev/nvme0n3 00:11:24.125 [job3] 00:11:24.125 filename=/dev/nvme0n4 00:11:24.125 Could not set queue depth (nvme0n1) 00:11:24.125 Could not set queue depth (nvme0n2) 00:11:24.125 Could not set queue depth (nvme0n3) 00:11:24.125 Could not set queue depth (nvme0n4) 00:11:24.385 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.385 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.385 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.385 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.385 fio-3.35 00:11:24.385 Starting 4 threads 00:11:25.771 00:11:25.771 job0: (groupid=0, jobs=1): err= 0: pid=1560493: Sun Oct 13 14:07:29 2024 00:11:25.771 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec) 00:11:25.771 slat (nsec): min=977, max=6952.7k, avg=58292.71, stdev=425016.38 00:11:25.771 clat (usec): min=2807, max=14105, avg=7705.97, stdev=1641.15 00:11:25.771 lat (usec): min=2813, max=15895, avg=7764.26, stdev=1672.12 00:11:25.771 clat percentiles (usec): 00:11:25.771 | 1.00th=[ 5080], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6652], 00:11:25.771 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:11:25.771 | 70.00th=[ 7832], 80.00th=[ 8848], 90.00th=[10028], 95.00th=[11338], 00:11:25.771 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13698], 99.95th=[13829], 00:11:25.771 | 99.99th=[14091] 00:11:25.771 write: IOPS=8854, BW=34.6MiB/s (36.3MB/s)(34.7MiB/1003msec); 0 zone resets 00:11:25.771 slat (nsec): min=1573, max=16706k, avg=49913.93, stdev=406013.37 00:11:25.771 clat (usec): min=1132, max=24179, avg=6768.18, stdev=2010.02 00:11:25.771 lat (usec): min=1142, max=24195, avg=6818.10, stdev=2036.47 00:11:25.771 clat percentiles (usec): 00:11:25.771 | 1.00th=[ 2737], 5.00th=[ 4080], 10.00th=[ 4424], 20.00th=[ 5473], 00:11:25.771 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:11:25.771 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7832], 95.00th=[ 9503], 00:11:25.771 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:11:25.771 | 99.99th=[24249] 00:11:25.771 bw ( KiB/s): min=33864, max=36184, per=34.70%, avg=35024.00, stdev=1640.49, samples=2 00:11:25.771 iops : min= 8466, max= 9046, avg=8756.00, stdev=410.12, samples=2 00:11:25.771 lat (msec) : 2=0.01%, 4=2.60%, 10=91.33%, 20=6.05%, 50=0.01% 00:11:25.771 cpu : usr=5.69%, sys=9.68%, ctx=568, majf=0, minf=1 00:11:25.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:25.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.771 issued rwts: total=8704,8881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.771 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.771 job1: (groupid=0, jobs=1): err= 0: pid=1560504: Sun Oct 13 14:07:29 2024 
00:11:25.771 read: IOPS=3942, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1005msec) 00:11:25.771 slat (nsec): min=983, max=18878k, avg=147035.59, stdev=1061171.68 00:11:25.771 clat (usec): min=1715, max=47912, avg=17007.41, stdev=9002.00 00:11:25.771 lat (usec): min=3486, max=47927, avg=17154.45, stdev=9057.57 00:11:25.771 clat percentiles (usec): 00:11:25.771 | 1.00th=[ 4621], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 8717], 00:11:25.771 | 30.00th=[10159], 40.00th=[13698], 50.00th=[15401], 60.00th=[16581], 00:11:25.771 | 70.00th=[20055], 80.00th=[21890], 90.00th=[28967], 95.00th=[35390], 00:11:25.771 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:11:25.771 | 99.99th=[47973] 00:11:25.771 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:25.771 slat (nsec): min=1655, max=14655k, avg=94428.04, stdev=430592.49 00:11:25.771 clat (usec): min=1241, max=47906, avg=14656.90, stdev=5406.18 00:11:25.771 lat (usec): min=1253, max=47917, avg=14751.32, stdev=5437.37 00:11:25.771 clat percentiles (usec): 00:11:25.771 | 1.00th=[ 2933], 5.00th=[ 5604], 10.00th=[ 7308], 20.00th=[11600], 00:11:25.771 | 30.00th=[14353], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:11:25.771 | 70.00th=[15270], 80.00th=[15401], 90.00th=[18482], 95.00th=[26870], 00:11:25.771 | 99.00th=[33817], 99.50th=[34866], 99.90th=[44827], 99.95th=[47449], 00:11:25.771 | 99.99th=[47973] 00:11:25.771 bw ( KiB/s): min=15376, max=17392, per=16.23%, avg=16384.00, stdev=1425.53, samples=2 00:11:25.771 iops : min= 3844, max= 4348, avg=4096.00, stdev=356.38, samples=2 00:11:25.771 lat (msec) : 2=0.11%, 4=1.25%, 10=21.02%, 20=57.97%, 50=19.65% 00:11:25.771 cpu : usr=3.69%, sys=3.39%, ctx=509, majf=0, minf=1 00:11:25.771 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:25.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.772 issued rwts: total=3962,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.772 job2: (groupid=0, jobs=1): err= 0: pid=1560522: Sun Oct 13 14:07:29 2024 00:11:25.772 read: IOPS=4200, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec) 00:11:25.772 slat (nsec): min=1015, max=13965k, avg=118233.17, stdev=874950.23 00:11:25.772 clat (usec): min=1373, max=40443, avg=14060.55, stdev=4709.40 00:11:25.772 lat (usec): min=4237, max=40446, avg=14178.79, stdev=4763.64 00:11:25.772 clat percentiles (usec): 00:11:25.772 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10159], 00:11:25.772 | 30.00th=[11863], 40.00th=[13173], 50.00th=[13698], 60.00th=[14353], 00:11:25.772 | 70.00th=[15008], 80.00th=[15664], 90.00th=[18744], 95.00th=[21627], 00:11:25.772 | 99.00th=[35914], 99.50th=[37487], 99.90th=[40633], 99.95th=[40633], 00:11:25.772 | 99.99th=[40633] 00:11:25.772 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:25.772 slat (nsec): min=1685, max=8595.4k, avg=104416.24, stdev=489628.31 00:11:25.772 clat (usec): min=1219, max=40447, avg=14765.13, stdev=4403.53 00:11:25.772 lat (usec): min=1228, max=40449, avg=14869.54, stdev=4422.22 00:11:25.772 clat percentiles (usec): 00:11:25.772 | 1.00th=[ 5473], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[11338], 00:11:25.772 | 30.00th=[14222], 40.00th=[15008], 50.00th=[15139], 60.00th=[15139], 00:11:25.772 | 70.00th=[15270], 80.00th=[15664], 90.00th=[20055], 95.00th=[23200], 00:11:25.772 | 
99.00th=[29492], 99.50th=[31065], 99.90th=[32375], 99.95th=[32375], 00:11:25.772 | 99.99th=[40633] 00:11:25.772 bw ( KiB/s): min=17776, max=19072, per=18.25%, avg=18424.00, stdev=916.41, samples=2 00:11:25.772 iops : min= 4444, max= 4768, avg=4606.00, stdev=229.10, samples=2 00:11:25.772 lat (msec) : 2=0.11%, 4=0.20%, 10=16.73%, 20=74.01%, 50=8.95% 00:11:25.772 cpu : usr=2.59%, sys=5.18%, ctx=534, majf=0, minf=2 00:11:25.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:25.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.772 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.772 job3: (groupid=0, jobs=1): err= 0: pid=1560529: Sun Oct 13 14:07:29 2024 00:11:25.772 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:11:25.772 slat (nsec): min=954, max=7977.5k, avg=69481.85, stdev=524968.74 00:11:25.772 clat (usec): min=2468, max=16976, avg=8988.37, stdev=2115.94 00:11:25.772 lat (usec): min=2473, max=16992, avg=9057.85, stdev=2142.49 00:11:25.772 clat percentiles (usec): 00:11:25.772 | 1.00th=[ 4228], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7635], 00:11:25.772 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:11:25.772 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[12256], 95.00th=[13435], 00:11:25.772 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15795], 99.95th=[15926], 00:11:25.772 | 99.99th=[16909] 00:11:25.772 write: IOPS=7753, BW=30.3MiB/s (31.8MB/s)(30.5MiB/1006msec); 0 zone resets 00:11:25.772 slat (nsec): min=1609, max=7317.4k, avg=54632.34, stdev=317718.48 00:11:25.772 clat (usec): min=1214, max=16046, avg=7512.79, stdev=1758.76 00:11:25.772 lat (usec): min=1223, max=16063, avg=7567.42, stdev=1774.31 00:11:25.772 clat percentiles (usec): 00:11:25.772 | 1.00th=[ 2606], 5.00th=[ 3916], 10.00th=[ 5014], 20.00th=[ 6128], 00:11:25.772 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8291], 00:11:25.772 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9896], 00:11:25.772 | 99.00th=[11076], 99.50th=[11600], 99.90th=[15795], 99.95th=[15795], 00:11:25.772 | 99.99th=[16057] 00:11:25.772 bw ( KiB/s): min=28936, max=32560, per=30.46%, avg=30748.00, stdev=2562.55, samples=2 00:11:25.772 iops : min= 7234, max= 8140, avg=7687.00, stdev=640.64, samples=2 00:11:25.772 lat (msec) : 2=0.12%, 4=2.85%, 10=83.68%, 20=13.35% 00:11:25.772 cpu : usr=5.87%, sys=7.06%, ctx=787, majf=0, minf=2 00:11:25.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:25.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.772 issued rwts: total=7680,7800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.772 00:11:25.772 Run status group 0 (all jobs): 00:11:25.772 READ: bw=95.4MiB/s (100MB/s), 15.4MiB/s-33.9MiB/s (16.1MB/s-35.5MB/s), io=96.0MiB (101MB), run=1003-1006msec 00:11:25.772 WRITE: bw=98.6MiB/s (103MB/s), 15.9MiB/s-34.6MiB/s (16.7MB/s-36.3MB/s), io=99.2MiB (104MB), run=1003-1006msec 00:11:25.772 00:11:25.772 Disk stats (read/write): 00:11:25.772 nvme0n1: ios=7218/7503, merge=0/0, ticks=51697/48405, in_queue=100102, util=87.37% 00:11:25.772 nvme0n2: ios=3215/3584, merge=0/0, ticks=53084/49641, 
in_queue=102725, util=88.18% 00:11:25.772 nvme0n3: ios=3617/3673, merge=0/0, ticks=47904/54964, in_queue=102868, util=92.29% 00:11:25.772 nvme0n4: ios=6201/6656, merge=0/0, ticks=52513/48593, in_queue=101106, util=97.12% 00:11:25.772 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:25.772 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1560622 00:11:25.772 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:25.772 14:07:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:25.772 [global] 00:11:25.772 thread=1 00:11:25.772 invalidate=1 00:11:25.772 rw=read 00:11:25.772 time_based=1 00:11:25.772 runtime=10 00:11:25.772 ioengine=libaio 00:11:25.772 direct=1 00:11:25.772 bs=4096 00:11:25.772 iodepth=1 00:11:25.772 norandommap=1 00:11:25.772 numjobs=1 00:11:25.772 00:11:25.772 [job0] 00:11:25.772 filename=/dev/nvme0n1 00:11:25.772 [job1] 00:11:25.772 filename=/dev/nvme0n2 00:11:25.772 [job2] 00:11:25.772 filename=/dev/nvme0n3 00:11:25.772 [job3] 00:11:25.772 filename=/dev/nvme0n4 00:11:25.772 Could not set queue depth (nvme0n1) 00:11:25.772 Could not set queue depth (nvme0n2) 00:11:25.772 Could not set queue depth (nvme0n3) 00:11:25.772 Could not set queue depth (nvme0n4) 00:11:26.032 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.032 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.032 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.032 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.032 fio-3.35 00:11:26.032 Starting 4 threads 00:11:29.330 14:07:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:29.330 14:07:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:29.330 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:11:29.330 fio: pid=1560994, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.330 14:07:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.330 14:07:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:29.330 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2781184, buflen=4096 00:11:29.330 fio: pid=1560987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.330 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5697536, buflen=4096 00:11:29.330 fio: pid=1560962, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.330 14:07:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.330 14:07:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:29.330 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.330 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:29.592 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=315392, buflen=4096 00:11:29.592 fio: pid=1560972, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.592 00:11:29.592 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1560962: Sun Oct 13 14:07:33 2024 00:11:29.592 read: IOPS=472, BW=1887KiB/s (1932kB/s)(5564KiB/2949msec) 00:11:29.592 slat (usec): min=6, max=24093, avg=54.32, stdev=732.87 00:11:29.592 clat (usec): min=156, max=41659, avg=2044.10, stdev=7488.95 00:11:29.592 lat (usec): min=163, max=41684, avg=2098.44, stdev=7519.95 00:11:29.592 clat percentiles (usec): 00:11:29.592 | 1.00th=[ 210], 5.00th=[ 310], 10.00th=[ 367], 20.00th=[ 482], 00:11:29.592 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 693], 00:11:29.592 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 922], 00:11:29.592 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:29.592 | 99.99th=[41681] 00:11:29.592 bw ( KiB/s): min= 1224, max= 2408, per=63.22%, avg=1768.00, stdev=470.98, samples=5 00:11:29.592 iops : min= 306, max= 602, avg=442.00, stdev=117.75, samples=5 00:11:29.592 lat (usec) : 250=2.23%, 500=20.55%, 750=54.31%, 1000=18.89% 00:11:29.592 lat (msec) : 2=0.36%, 4=0.07%, 50=3.52% 00:11:29.592 cpu : usr=0.34%, sys=1.46%, ctx=1396, majf=0, minf=1 00:11:29.592 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.592 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.592 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.592 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.592 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1560972: Sun Oct 13 14:07:33 2024 00:11:29.592 read: IOPS=24, BW=97.4KiB/s (99.8kB/s)(308KiB/3161msec) 00:11:29.592 slat (usec): min=24, max=261, avg=30.93, stdev=34.56 00:11:29.592 clat (usec): min=712, max=42089, avg=40727.36, stdev=6585.69 00:11:29.592 lat (usec): min=748, max=42114, avg=40758.36, stdev=6585.47 00:11:29.592 clat percentiles (usec): 00:11:29.592 | 1.00th=[ 709], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:29.592 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:29.593 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:29.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:29.593 | 99.99th=[42206] 00:11:29.593 bw ( KiB/s): min= 96, max= 104, per=3.47%, avg=97.33, stdev= 3.27, samples=6 00:11:29.593 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:11:29.593 lat (usec) : 750=2.56% 00:11:29.593 lat (msec) : 50=96.15% 00:11:29.593 cpu : usr=0.09%, sys=0.00%, ctx=80, majf=0, minf=2 00:11:29.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:29.593 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.593 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.593 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1560987: Sun Oct 13 14:07:33 2024 00:11:29.593 read: IOPS=242, BW=970KiB/s (994kB/s)(2716KiB/2799msec) 00:11:29.593 slat (usec): min=6, max=7415, avg=35.77, stdev=283.58 00:11:29.593 clat (usec): min=180, max=42064, avg=4047.86, stdev=11025.90 00:11:29.593 lat (usec): min=188, max=42090, avg=4083.65, stdev=11026.39 00:11:29.593 clat percentiles (usec): 00:11:29.593 | 1.00th=[ 408], 5.00th=[ 545], 10.00th=[ 611], 20.00th=[ 709], 00:11:29.593 | 30.00th=[ 766], 40.00th=[ 807], 50.00th=[ 840], 60.00th=[ 873], 00:11:29.593 | 70.00th=[ 930], 80.00th=[ 971], 90.00th=[ 1045], 95.00th=[41157], 00:11:29.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:29.593 | 99.99th=[42206] 00:11:29.593 bw ( KiB/s): min= 136, max= 3216, per=37.69%, avg=1054.40, stdev=1314.77, samples=5 00:11:29.593 iops : min= 34, max= 804, avg=263.60, stdev=328.69, samples=5 00:11:29.593 lat (usec) : 250=0.15%, 500=1.76%, 750=23.97%, 1000=59.85% 00:11:29.593 lat (msec) : 2=6.18%, 50=7.94% 00:11:29.593 cpu : usr=0.18%, sys=0.79%, ctx=682, majf=0, minf=2 00:11:29.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.593 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.593 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.593 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1560994: Sun Oct 13 14:07:33 2024 00:11:29.593 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(252KiB/2615msec) 00:11:29.593 slat (nsec): min=8889, max=36065, avg=26896.23, stdev=2864.56 00:11:29.593 clat (usec): min=612, max=42870, avg=41134.25, stdev=5205.17 00:11:29.593 lat (usec): min=648, max=42902, avg=41161.15, stdev=5204.02 00:11:29.593 clat percentiles (usec): 00:11:29.593 | 1.00th=[ 611], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:29.593 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:29.593 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:29.593 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:29.593 | 99.99th=[42730] 00:11:29.593 bw ( KiB/s): min= 96, max= 96, per=3.43%, avg=96.00, stdev= 0.00, samples=5 00:11:29.593 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:11:29.593 lat (usec) : 750=1.56% 00:11:29.593 lat (msec) : 50=96.88% 00:11:29.593 cpu : usr=0.00%, sys=0.15%, ctx=65, majf=0, minf=2 00:11:29.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.593 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.593 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.593 00:11:29.593 Run status group 0 (all jobs): 00:11:29.593 READ: bw=2797KiB/s (2864kB/s), 96.4KiB/s-1887KiB/s (98.7kB/s-1932kB/s), io=8840KiB (9052kB), run=2615-3161msec 00:11:29.593 00:11:29.593 
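The trace above is the expected-failure half of the hotplug test: a read-only fio run (iodepth=1, runtime=10) is started in the background against the four namespaces, the harness then deletes the concat/raid bdevs and the backing Malloc bdevs out from under it, and every job dies with "Operation not supported" io_u errors because its namespace has lost its backing store. The script later waits on the fio pid and treats the non-zero exit status as success (fio.sh@70-80 below). A minimal sketch of that pattern, assuming the SPDK checkout path and bdev names from the log; the sleep and the error handling are illustrative, not the exact fio.sh logic:

    # Start fio in the background, pull the bdevs away mid-I/O, expect failure.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3   # let the jobs ramp up before removing their backing bdevs

    "$SPDK"/scripts/rpc.py bdev_raid_delete concat0
    "$SPDK"/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2; do   # the harness continues through Malloc6
        "$SPDK"/scripts/rpc.py bdev_malloc_delete "$m"
    done

    # wait returns fio's exit status; success here would mean the test failed
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal" >&2
        exit 1
    fi
    echo "nvmf hotplug test: fio failed as expected"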
Disk stats (read/write): 00:11:29.593 nvme0n1: ios=1229/0, merge=0/0, ticks=2739/0, in_queue=2739, util=93.32% 00:11:29.593 nvme0n2: ios=75/0, merge=0/0, ticks=3055/0, in_queue=3055, util=95.63% 00:11:29.593 nvme0n3: ios=661/0, merge=0/0, ticks=2515/0, in_queue=2515, util=96.03% 00:11:29.593 nvme0n4: ios=62/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.42% 00:11:29.593 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.593 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:29.854 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.854 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:30.115 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.115 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:30.115 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.115 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:30.376 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:30.376 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1560622 00:11:30.376 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:30.376 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:30.376 nvmf hotplug test: fio failed as expected 00:11:30.376 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.637 rmmod nvme_tcp 00:11:30.637 rmmod nvme_fabrics 00:11:30.637 rmmod nvme_keyring 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1557083 ']' 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1557083 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1557083 ']' 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1557083 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.637 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557083 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557083' 00:11:30.898 killing process with pid 1557083 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1557083 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1557083 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # 
nvmf_tcp_fini 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.898 14:07:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.443 00:11:33.443 real 0m29.573s 00:11:33.443 user 2m35.068s 00:11:33.443 sys 0m9.693s 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.443 ************************************ 00:11:33.443 END TEST nvmf_fio_target 00:11:33.443 ************************************ 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.443 ************************************ 00:11:33.443 START TEST nvmf_bdevio 00:11:33.443 ************************************ 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.443 * Looking for test storage... 
00:11:33.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:33.443 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.444 --rc genhtml_branch_coverage=1 00:11:33.444 --rc genhtml_function_coverage=1 00:11:33.444 --rc genhtml_legend=1 00:11:33.444 --rc geninfo_all_blocks=1 00:11:33.444 --rc geninfo_unexecuted_blocks=1 00:11:33.444 00:11:33.444 ' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.444 --rc genhtml_branch_coverage=1 00:11:33.444 --rc genhtml_function_coverage=1 00:11:33.444 --rc genhtml_legend=1 00:11:33.444 --rc geninfo_all_blocks=1 00:11:33.444 --rc geninfo_unexecuted_blocks=1 00:11:33.444 00:11:33.444 ' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.444 --rc genhtml_branch_coverage=1 00:11:33.444 --rc genhtml_function_coverage=1 00:11:33.444 --rc genhtml_legend=1 00:11:33.444 --rc geninfo_all_blocks=1 00:11:33.444 --rc geninfo_unexecuted_blocks=1 00:11:33.444 00:11:33.444 ' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.444 --rc genhtml_branch_coverage=1 00:11:33.444 --rc genhtml_function_coverage=1 00:11:33.444 --rc genhtml_legend=1 00:11:33.444 --rc geninfo_all_blocks=1 00:11:33.444 --rc geninfo_unexecuted_blocks=1 00:11:33.444 00:11:33.444 ' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.444 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.586 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:41.587 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:41.587 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.587 14:07:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:41.587 Found net devices under 0000:31:00.0: cvl_0_0 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:41.587 Found net devices under 0000:31:00.1: cvl_0_1 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.587 
14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:11:41.587 00:11:41.587 --- 10.0.0.2 ping statistics --- 00:11:41.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.587 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:11:41.587 00:11:41.587 --- 10.0.0.1 ping statistics --- 00:11:41.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.587 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1566218 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1566218 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1566218 ']' 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.587 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.588 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.588 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.588 [2024-10-13 14:07:44.660737] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:11:41.588 [2024-10-13 14:07:44.660800] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.588 [2024-10-13 14:07:44.802728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
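By this point nvmftestinit has split the two e810 ports into a point-to-point pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1/24), TCP port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is launched inside the namespace. Consolidated, the setup amounts to the sketch below; interface names, addresses, and flags are the ones from the trace, and the nvmf_tgt path assumes the SPDK build root:

    # One port becomes the target inside a netns, the other stays as initiator.
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # open the NVMe/TCP port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # the target then runs inside the namespace, as nvmfappstart does above
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78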
There is no support for it in SPDK. Enabled only for validation. 00:11:41.588 [2024-10-13 14:07:44.851055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.588 [2024-10-13 14:07:44.878606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.588 [2024-10-13 14:07:44.878652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.588 [2024-10-13 14:07:44.878661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.588 [2024-10-13 14:07:44.878668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.588 [2024-10-13 14:07:44.878675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.588 [2024-10-13 14:07:44.880924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:41.588 [2024-10-13 14:07:44.881111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:41.588 [2024-10-13 14:07:44.881274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.588 [2024-10-13 14:07:44.881275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.849 [2024-10-13 14:07:45.539881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.849 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.111 Malloc0 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.111 
14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.111 [2024-10-13 14:07:45.616278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:42.111 { 00:11:42.111 "params": { 00:11:42.111 "name": "Nvme$subsystem", 00:11:42.111 "trtype": "$TEST_TRANSPORT", 00:11:42.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.111 "adrfam": "ipv4", 00:11:42.111 "trsvcid": "$NVMF_PORT", 00:11:42.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.111 "hdgst": ${hdgst:-false}, 00:11:42.111 "ddgst": ${ddgst:-false} 00:11:42.111 }, 00:11:42.111 "method": "bdev_nvme_attach_controller" 00:11:42.111 } 00:11:42.111 EOF 00:11:42.111 )") 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:42.111 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:42.111 "params": { 00:11:42.111 "name": "Nvme1", 00:11:42.111 "trtype": "tcp", 00:11:42.111 "traddr": "10.0.0.2", 00:11:42.111 "adrfam": "ipv4", 00:11:42.111 "trsvcid": "4420", 00:11:42.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.111 "hdgst": false, 00:11:42.111 "ddgst": false 00:11:42.111 }, 00:11:42.111 "method": "bdev_nvme_attach_controller" 00:11:42.111 }' 00:11:42.111 [2024-10-13 14:07:45.673296] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
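[Editor's note] The five rpc_cmd calls traced above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the entire target-side setup for this bdevio test; the generated attach-controller JSON printed just above is then fed to the bdevio initiator via --json /dev/fd/62. As a standalone sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py pointed at its default /var/tmp/spdk.sock (rpc_cmd is the suite's wrapper around that JSON-RPC client), the same subsystem could be built like this, with all flags copied verbatim from the trace:

    # TCP transport with the options used in this run (-o -u 8192 as traced)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB RAM-backed bdev, 512-byte blocks (bdevio later reports 131072 blocks)
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem allowing any host (-a), serial number as used here
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420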
00:11:42.111 [2024-10-13 14:07:45.673361] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566442 ] 00:11:42.111 [2024-10-13 14:07:45.808634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:42.372 [2024-10-13 14:07:45.855222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.372 [2024-10-13 14:07:45.887511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.372 [2024-10-13 14:07:45.887667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.372 [2024-10-13 14:07:45.887668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.634 I/O targets: 00:11:42.634 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:42.634 00:11:42.634 00:11:42.634 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.634 http://cunit.sourceforge.net/ 00:11:42.634 00:11:42.634 00:11:42.634 Suite: bdevio tests on: Nvme1n1 00:11:42.634 Test: blockdev write read block ...passed 00:11:42.634 Test: blockdev write zeroes read block ...passed 00:11:42.634 Test: blockdev write zeroes read no split ...passed 00:11:42.634 Test: blockdev write zeroes read split ...passed 00:11:42.634 Test: blockdev write zeroes read split partial ...passed 00:11:42.634 Test: blockdev reset ...[2024-10-13 14:07:46.260618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:42.634 [2024-10-13 14:07:46.260721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ee750 (9): Bad file descriptor 00:11:42.634 [2024-10-13 14:07:46.315108] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:42.634 passed 00:11:42.634 Test: blockdev write read 8 blocks ...passed 00:11:42.634 Test: blockdev write read size > 128k ...passed 00:11:42.634 Test: blockdev write read invalid size ...passed 00:11:42.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.895 Test: blockdev write read max offset ...passed 00:11:42.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:42.895 Test: blockdev writev readv 8 blocks ...passed 00:11:42.895 Test: blockdev writev readv 30 x 1block ...passed 00:11:42.895 Test: blockdev writev readv block ...passed 00:11:42.895 Test: blockdev writev readv size > 128k ...passed 00:11:42.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:42.895 Test: blockdev comparev and writev ...[2024-10-13 14:07:46.576615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.576671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.576689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.576699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.577167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.577184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.577202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.577213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.577650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.577665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.577680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.577690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.578115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.578130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:42.895 [2024-10-13 14:07:46.578146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:42.895 [2024-10-13 14:07:46.578155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:43.156 passed 00:11:43.156 Test: blockdev nvme passthru rw ...passed 00:11:43.156 Test: blockdev nvme passthru vendor specific ...[2024-10-13 14:07:46.662587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.156 [2024-10-13 14:07:46.662609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:43.156 [2024-10-13 14:07:46.662768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.156 [2024-10-13 14:07:46.662781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:43.156 [2024-10-13 14:07:46.662960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.156 [2024-10-13 14:07:46.662972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:43.156 [2024-10-13 14:07:46.663155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.156 [2024-10-13 14:07:46.663169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:43.156 passed 00:11:43.156 Test: blockdev nvme admin passthru ...passed 00:11:43.156 Test: blockdev copy ...passed 00:11:43.156 00:11:43.156 Run Summary: Type Total Ran Passed Failed Inactive 00:11:43.156 suites 1 1 n/a 0 0 00:11:43.156 tests 23 23 23 0 0 00:11:43.156 asserts 152 152 152 0 n/a 00:11:43.156 00:11:43.156 Elapsed time = 1.279 seconds 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.156 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.417 rmmod nvme_tcp 00:11:43.417 rmmod nvme_fabrics 00:11:43.417 rmmod nvme_keyring 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
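[Editor's note] The error-looking completions in the comparev/writev test above are the expected outcomes of a fused COMPARE AND WRITE whose compare half miscompares: per the NVMe base spec, 02/85 is status code type "Media Errors" with "Compare Failure" for the COMPARE, and 00/09 is the generic "Command Aborted due to Failed Fused Command" for the paired WRITE; the 00/01 in the passthru test is the generic "Invalid Opcode". A small hypothetical helper (not part of the suite) to decode the (SCT/SC) pairs that spdk_nvme_print_completion prints:

    # hypothetical decoder for the sct/sc pairs seen in this trace
    decode_nvme_status() {
      case "$1" in
        00/01) echo 'Generic / Invalid Opcode' ;;
        00/09) echo 'Generic / Command Aborted due to Failed Fused Command' ;;
        02/85) echo 'Media Errors / Compare Failure' ;;
        *)     echo "unrecognized status $1" ;;
      esac
    }
    decode_nvme_status 02/85   # -> Media Errors / Compare Failure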
00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1566218 ']' 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1566218 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1566218 ']' 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1566218 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.417 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1566218 00:11:43.417 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:43.417 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:43.417 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1566218' 00:11:43.417 killing process with pid 1566218 00:11:43.417 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1566218 00:11:43.417 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1566218 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.677 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.637 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.637 00:11:45.637 real 0m12.602s 00:11:45.637 user 0m13.661s 00:11:45.637 sys 0m6.317s 00:11:45.637 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.637 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.637 ************************************ 00:11:45.637 END TEST nvmf_bdevio 00:11:45.637 ************************************ 00:11:45.637 14:07:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:45.637 00:11:45.637 real 5m8.052s 00:11:45.637 user 11m49.790s 00:11:45.637 sys 1m53.316s 
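[Editor's note] The nvmftestfini teardown traced above reduces to a handful of commands. A sketch mirroring the traced steps, with the pid and interface/namespace names from this run (the ip netns delete line is an assumption about what _remove_spdk_ns amounts to, since its body is redirected away in the trace):

    kill 1566218 && wait 1566218                           # stop nvmf_tgt; wait works because it is a child of the test shell
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged ACCEPT rule, keep everything else
    ip netns delete cvl_0_0_ns_spdk                        # tear down the target-side namespace (assumption)
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address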
00:11:45.637 14:07:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.637 14:07:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.637 ************************************ 00:11:45.637 END TEST nvmf_target_core 00:11:45.637 ************************************ 00:11:45.931 14:07:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:45.931 14:07:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.931 14:07:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.931 14:07:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.931 ************************************ 00:11:45.931 START TEST nvmf_target_extra 00:11:45.931 ************************************ 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:45.931 * Looking for test storage... 00:11:45.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.931 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.932 --rc genhtml_branch_coverage=1 00:11:45.932 --rc genhtml_function_coverage=1 00:11:45.932 --rc genhtml_legend=1 00:11:45.932 --rc geninfo_all_blocks=1 00:11:45.932 --rc geninfo_unexecuted_blocks=1 00:11:45.932 00:11:45.932 ' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.932 --rc genhtml_branch_coverage=1 00:11:45.932 --rc genhtml_function_coverage=1 00:11:45.932 --rc genhtml_legend=1 00:11:45.932 --rc geninfo_all_blocks=1 00:11:45.932 --rc geninfo_unexecuted_blocks=1 00:11:45.932 00:11:45.932 ' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.932 --rc genhtml_branch_coverage=1 00:11:45.932 --rc genhtml_function_coverage=1 00:11:45.932 --rc genhtml_legend=1 00:11:45.932 --rc geninfo_all_blocks=1 00:11:45.932 --rc geninfo_unexecuted_blocks=1 00:11:45.932 00:11:45.932 ' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.932 --rc genhtml_branch_coverage=1 00:11:45.932 --rc genhtml_function_coverage=1 00:11:45.932 --rc genhtml_legend=1 00:11:45.932 --rc geninfo_all_blocks=1 00:11:45.932 --rc geninfo_unexecuted_blocks=1 00:11:45.932 00:11:45.932 ' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
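[Editor's note] The scripts/common.sh trace above (`lt 1.15 2` via cmp_versions) is a component-wise dotted-version comparison: split both versions on . - :, then compare components left to right, treating the first inequality as decisive. A condensed runnable sketch of the same walk (the real helper additionally validates each component through its decimal() check):

    version_lt() {                       # usage: version_lt A B  ->  exit 0 iff A < B
      local -a a b; local i x y
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}       # missing components compare as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
      done
      return 1                           # equal is not less-than
    }
    version_lt 1.15 2 && echo yes        # prints yes: ver1[0]=1 < ver2[0]=2, as traced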
00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.932 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.193 ************************************ 00:11:46.193 START TEST nvmf_example 00:11:46.193 ************************************ 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:46.193 * Looking for test storage... 
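[Editor's note] The "[: : integer expression expected" message above is a genuine wart, and it recurs every time nvmf/common.sh is sourced in this log: line 33 ends up running '[' '' -eq 1 ']' because the flag it tests is unset, and test(1) treats an empty operand to -eq as an error rather than as false. A defensive pattern (the variable name below is hypothetical; the trace does not show which flag line 33 tests):

    # guard numeric tests against unset/empty flags
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      : # flag-specific setup would go here
    fi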
00:11:46.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.193 --rc genhtml_branch_coverage=1 00:11:46.193 --rc genhtml_function_coverage=1 00:11:46.193 --rc genhtml_legend=1 00:11:46.193 --rc geninfo_all_blocks=1 00:11:46.193 --rc geninfo_unexecuted_blocks=1 00:11:46.193 00:11:46.193 ' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.193 --rc genhtml_branch_coverage=1 00:11:46.193 --rc genhtml_function_coverage=1 00:11:46.193 --rc genhtml_legend=1 00:11:46.193 --rc geninfo_all_blocks=1 00:11:46.193 --rc geninfo_unexecuted_blocks=1 00:11:46.193 00:11:46.193 ' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.193 --rc genhtml_branch_coverage=1 00:11:46.193 --rc genhtml_function_coverage=1 00:11:46.193 --rc genhtml_legend=1 00:11:46.193 --rc geninfo_all_blocks=1 00:11:46.193 --rc geninfo_unexecuted_blocks=1 00:11:46.193 00:11:46.193 ' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:46.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.193 --rc genhtml_branch_coverage=1 00:11:46.193 --rc genhtml_function_coverage=1 00:11:46.193 --rc genhtml_legend=1 00:11:46.193 --rc geninfo_all_blocks=1 00:11:46.193 --rc geninfo_unexecuted_blocks=1 00:11:46.193 00:11:46.193 ' 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:46.193 14:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.193 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.194 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.194 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:46.454 14:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:46.454 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.455 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:54.594 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:54.594 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:54.594 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.594 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:54.595 Found net devices under 0000:31:00.0: cvl_0_0 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:54.595 Found net devices under 0000:31:00.1: cvl_0_1 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.595 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:11:54.595 00:11:54.595 --- 10.0.0.2 ping statistics --- 00:11:54.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.595 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:11:54.595 00:11:54.595 --- 10.0.0.1 ping statistics --- 00:11:54.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.595 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1571062 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1571062 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1571062 ']' 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.595 14:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.595 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.856 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.856 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:54.856 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:54.856 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.856 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.117 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
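The rpc_cmd calls traced above are the complete provisioning sequence for the example target: a TCP transport, one 64 MiB malloc bdev, a subsystem wrapping it, and a listener on 10.0.0.2:4420. The repeated [[ 0 == 0 ]] checks after each call are the harness comparing each RPC's exit status against zero before continuing. Issued directly with SPDK's rpc.py against the default /var/tmp/spdk.sock socket, the same sequence looks like this (a sketch; rpc_cmd is the harness's wrapper around these same calls):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512                   # 64 MiB RAM disk, 512 B blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                     # -a: allow any host, -s: serial number
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                   # address owned by the target namespace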
xtrace_disable
00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:55.118 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:07.342 Initializing NVMe Controllers
00:12:07.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:07.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:07.342 Initialization complete. Launching workers.
00:12:07.342 ========================================================
00:12:07.342                                                                              Latency(us)
00:12:07.342 Device Information                                                     :     IOPS    MiB/s   Average      min      max
00:12:07.342 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18702.03    73.05   3421.93   608.99 16414.60
00:12:07.342 ========================================================
00:12:07.342 Total                                                                  : 18702.03    73.05   3421.93   608.99 16414.60
00:12:07.342
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:07.342 rmmod nvme_tcp
00:12:07.342 rmmod nvme_fabrics
00:12:07.342 rmmod nvme_keyring
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1571062 ']'
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1571062
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1571062 ']'
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1571062
00:12:07.342 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1571062
00:12:07.342 14:08:09
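The ten-second spdk_nvme_perf run above (-q 64 outstanding IOs, -o 4096 byte IOs, randrw with -M 30, i.e. a 30% read mix) reports numbers that are internally consistent, which is a quick sanity check for any table of this shape. At a fixed queue depth, Little's law ties the columns together:

    IOPS  = QD / mean latency = 64 / 3421.93 us      ~ 18703 ops/s   (reported: 18702.03)
    MiB/s = IOPS x IO size    = 18702.03 x 4096 B    ~ 73.05 MiB/s   (reported: 73.05)

The wide min/max spread (about 0.61 ms to 16.4 ms) is tail latency over the TCP transport, not an error; the run completes and tears down cleanly below.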
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1571062' 00:12:07.342 killing process with pid 1571062 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1571062 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1571062 00:12:07.342 nvmf threads initialize successfully 00:12:07.342 bdev subsystem init successfully 00:12:07.342 created a nvmf target service 00:12:07.342 create targets's poll groups done 00:12:07.342 all subsystems of target started 00:12:07.342 nvmf target is running 00:12:07.342 all subsystems of target stopped 00:12:07.342 destroy targets's poll groups done 00:12:07.342 destroyed the nvmf target service 00:12:07.342 bdev subsystem finish successfully 00:12:07.342 nvmf threads destroy successfully 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.342 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.600 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.600 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:07.600 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.600 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.600 00:12:07.600 real 0m21.624s 00:12:07.600 user 0m46.380s 00:12:07.600 sys 0m7.113s 00:12:07.600 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.600 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:07.600 ************************************ 00:12:07.600 END TEST nvmf_example 00:12:07.600 ************************************ 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
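Two details of the teardown above are worth spelling out. First, because every firewall rule the harness inserts carries an '-m comment --comment SPDK_NVMF:...' tag (see the ipts call during setup), iptr can remove exactly its own rules by round-tripping the ruleset through a filter; then the namespace and leftover address are cleaned up. In outline (the netns deletion is an assumption about what _remove_spdk_ns does; the other two commands are verbatim from the trace):

    # Drop only the harness-tagged rules; everything else survives untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # release the initiator-side address

Second, the out-of-order app messages ("nvmf threads initialize successfully" through "nvmf threads destroy successfully") look like the killed target's buffered stdout flushing after the kill; the script ends in wait, so the shutdown is orderly despite the interleaving.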
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.860 ************************************ 00:12:07.860 START TEST nvmf_filesystem 00:12:07.860 ************************************ 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:07.860 * Looking for test storage... 00:12:07.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:07.860 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.861 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.124 --rc genhtml_branch_coverage=1 00:12:08.124 --rc genhtml_function_coverage=1 00:12:08.124 --rc genhtml_legend=1 00:12:08.124 --rc geninfo_all_blocks=1 00:12:08.124 --rc geninfo_unexecuted_blocks=1 00:12:08.124 00:12:08.124 ' 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.124 --rc genhtml_branch_coverage=1 00:12:08.124 --rc genhtml_function_coverage=1 00:12:08.124 --rc genhtml_legend=1 00:12:08.124 --rc geninfo_all_blocks=1 00:12:08.124 --rc geninfo_unexecuted_blocks=1 00:12:08.124 00:12:08.124 ' 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.124 --rc genhtml_branch_coverage=1 00:12:08.124 --rc genhtml_function_coverage=1 00:12:08.124 --rc genhtml_legend=1 00:12:08.124 --rc geninfo_all_blocks=1 00:12:08.124 --rc geninfo_unexecuted_blocks=1 00:12:08.124 00:12:08.124 ' 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.124 --rc genhtml_branch_coverage=1 00:12:08.124 --rc genhtml_function_coverage=1 00:12:08.124 --rc genhtml_legend=1 00:12:08.124 --rc geninfo_all_blocks=1 00:12:08.124 --rc geninfo_unexecuted_blocks=1 00:12:08.124 00:12:08.124 ' 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:08.124 14:08:11 
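The scripts/common.sh trace above (cmp_versions, decimal, the IFS=.-: reads) is the harness's generic dotted-version comparator, here evaluating "lt 1.15 2" to choose lcov option spellings: both version strings are split on '.', '-' and ':', each field is validated as a decimal, and the fields are compared numerically left to right. A condensed, self-contained rendering of that logic (a sketch, not the verbatim helper; the real one also rejects non-numeric fields via decimal()):

    # lt A B -> succeeds when version A sorts strictly before version B.
    lt() {
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x: use the 1.x option spelling"

The multi-line LCOV_OPTS and LCOV exports that follow are each one quoted string; the trace prints them across several lines, which is why bare quote characters appear on lines of their own.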
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:08.124 14:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:12:08.124 14:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:08.124 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:08.124 #define SPDK_CONFIG_H 00:12:08.124 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:08.124 #define SPDK_CONFIG_APPS 1 00:12:08.124 #define SPDK_CONFIG_ARCH native 00:12:08.124 #undef SPDK_CONFIG_ASAN 00:12:08.124 #undef SPDK_CONFIG_AVAHI 00:12:08.124 #undef SPDK_CONFIG_CET 00:12:08.124 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:08.125 #define SPDK_CONFIG_COVERAGE 1 00:12:08.125 #define SPDK_CONFIG_CROSS_PREFIX 00:12:08.125 #undef SPDK_CONFIG_CRYPTO 00:12:08.125 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:08.125 #undef SPDK_CONFIG_CUSTOMOCF 00:12:08.125 #undef SPDK_CONFIG_DAOS 00:12:08.125 #define SPDK_CONFIG_DAOS_DIR 00:12:08.125 #define SPDK_CONFIG_DEBUG 1 00:12:08.125 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:08.125 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:08.125 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:08.125 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:08.125 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:08.125 #undef SPDK_CONFIG_DPDK_UADK 00:12:08.125 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:08.125 #define SPDK_CONFIG_EXAMPLES 1 00:12:08.125 #undef SPDK_CONFIG_FC 00:12:08.125 #define SPDK_CONFIG_FC_PATH 00:12:08.125 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:08.125 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:08.125 #define SPDK_CONFIG_FSDEV 1 00:12:08.125 #undef SPDK_CONFIG_FUSE 00:12:08.125 #undef SPDK_CONFIG_FUZZER 00:12:08.125 #define SPDK_CONFIG_FUZZER_LIB 00:12:08.125 #undef SPDK_CONFIG_GOLANG 00:12:08.125 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:08.125 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:08.125 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:08.125 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:08.125 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:08.125 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:08.125 #undef SPDK_CONFIG_HAVE_LZ4 00:12:08.125 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:08.125 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:08.125 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:08.125 #define SPDK_CONFIG_IDXD 1 00:12:08.125 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:08.125 #undef SPDK_CONFIG_IPSEC_MB 00:12:08.125 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:08.125 #define SPDK_CONFIG_ISAL 1 00:12:08.125 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:08.125 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:08.125 #define SPDK_CONFIG_LIBDIR 00:12:08.125 #undef SPDK_CONFIG_LTO 00:12:08.125 #define SPDK_CONFIG_MAX_LCORES 128 00:12:08.125 #define SPDK_CONFIG_NVME_CUSE 1 00:12:08.125 #undef SPDK_CONFIG_OCF 00:12:08.125 #define SPDK_CONFIG_OCF_PATH 00:12:08.125 #define SPDK_CONFIG_OPENSSL_PATH 00:12:08.125 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:08.125 #define SPDK_CONFIG_PGO_DIR 00:12:08.125 #undef SPDK_CONFIG_PGO_USE 00:12:08.125 #define SPDK_CONFIG_PREFIX /usr/local 00:12:08.125 #undef SPDK_CONFIG_RAID5F 00:12:08.125 #undef SPDK_CONFIG_RBD 00:12:08.125 #define SPDK_CONFIG_RDMA 1 00:12:08.125 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:08.125 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:08.125 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:08.125 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:08.125 #define SPDK_CONFIG_SHARED 1 00:12:08.125 #undef SPDK_CONFIG_SMA 00:12:08.125 
#define SPDK_CONFIG_TESTS 1 00:12:08.125 #undef SPDK_CONFIG_TSAN 00:12:08.125 #define SPDK_CONFIG_UBLK 1 00:12:08.125 #define SPDK_CONFIG_UBSAN 1 00:12:08.125 #undef SPDK_CONFIG_UNIT_TESTS 00:12:08.125 #undef SPDK_CONFIG_URING 00:12:08.125 #define SPDK_CONFIG_URING_PATH 00:12:08.125 #undef SPDK_CONFIG_URING_ZNS 00:12:08.125 #undef SPDK_CONFIG_USDT 00:12:08.125 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:08.125 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:08.125 #define SPDK_CONFIG_VFIO_USER 1 00:12:08.125 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:08.125 #define SPDK_CONFIG_VHOST 1 00:12:08.125 #define SPDK_CONFIG_VIRTIO 1 00:12:08.125 #undef SPDK_CONFIG_VTUNE 00:12:08.125 #define SPDK_CONFIG_VTUNE_DIR 00:12:08.125 #define SPDK_CONFIG_WERROR 1 00:12:08.125 #define SPDK_CONFIG_WPDK_DIR 00:12:08.125 #undef SPDK_CONFIG_XNVME 00:12:08.125 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
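applications.sh@23 above does something easy to misread in a trace: it expands the entire include/spdk/config.h into a [[ ... == *pattern* ]] glob match to test whether the build defines SPDK_CONFIG_DEBUG, which is why the whole header is printed with every glob metacharacter backslash-escaped. Reduced to its essentials (a sketch using the same workspace path):

    # Probe the generated config header for a debug build.
    config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config && $(< "$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"    # gates SPDK_AUTOTEST_DEBUG_APPS handling
    fi

Reading the file with $(< ...) avoids spawning cat, and the glob match needs no external tools at all, which matters in a harness that runs this probe for every nested test.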
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:08.125 14:08:11 
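The PATH exports above visibly repeat the same /opt/golangci, /opt/protoc and /opt/go segments many times over: paths/export.sh prepends unconditionally each time it is sourced, and it is sourced once per nested test; the same pattern shows up again in the LD_LIBRARY_PATH and PYTHONPATH exports further down. Lookup still works (first match wins) and nothing breaks, but the variables grow without bound across nested runs. A guard of the following shape keeps such a variable idempotent; this is illustrative only, not code from the harness:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already there, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH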
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:08.125 14:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:08.125 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : main 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:08.126 14:08:11 
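The long run of ': <value>' / 'export SPDK_TEST_*' pairs above is autotest_common.sh materializing the test matrix: each knob gets a default through bash's no-op ':' builtin with the ':=' expansion and is then exported, so values set earlier by the CI job survive (here, for example, SPDK_TEST_NVMF_NICS=e810 and SPDK_TEST_NATIVE_DPDK=main are non-defaults) while untouched knobs collapse to 0 or empty. The idiom in isolation:

    # ':' evaluates its arguments and discards them; ':=' assigns only when
    # the variable is unset or empty. xtrace therefore shows the value after
    # assignment, e.g. ': 1' or ': e810', exactly as in the trace above.
    : "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_NVMF_NICS:=}"
    export SPDK_TEST_NVMF SPDK_TEST_NVMF_NICS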
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:08.126 14:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1573843 ]] 00:12:08.126 14:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1573843 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.EABVpz 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EABVpz/tests/target /tmp/spdk.EABVpz 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:08.126 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=121257672704 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356537856 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8098865152 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847894016 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677818368 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=450560 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:08.127 * Looking for test storage... 
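[editor's note] The block above is the harness's set_test_storage probe: it snapshots df -T into per-mount associative arrays (mounts, fss, sizes, avails, uses), then walks the storage candidates and keeps the first directory whose backing filesystem can hold the requested space (set_test_storage was called with 2147483648 bytes; the harness pads that to requested_size=2214592512). A minimal standalone sketch of the same idea follows — the -B1 flag and the two-entry candidate list are illustrative assumptions, not autotest_common.sh's exact code:

    #!/usr/bin/env bash
    # Sketch of the storage probe traced above (not the harness itself).
    requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB, the value passed to set_test_storage
    candidates=("$PWD" "/tmp")                   # stand-ins for storage_candidates

    declare -A fss avails
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs                        # filesystem type, keyed by mount point
        avails["$mount"]=$avail                  # free bytes, keyed by mount point
    done < <(df -T -B1 | tail -n +2)             # -B1 => byte units; skip the header row

    for dir in "${candidates[@]}"; do
        mount=$(df -B1 "$dir" | awk 'NR==2 {print $6}')   # mount point backing $dir
        if [[ ${avails[$mount]:-0} -ge $requested_size ]]; then
            echo "using $dir on $mount (${fss[$mount]}, ${avails[$mount]} bytes free)"
            break
        fi
    done

The trace also shows the overlay special case: when the winning candidate resolves to the root overlay mount, the harness appears to additionally check that projected usage stays below 95% of the filesystem (new_size * 100 / sizes[/] > 95) before accepting it.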
00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=121257672704 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10313457664 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:08.127 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.388 --rc genhtml_branch_coverage=1 00:12:08.388 --rc genhtml_function_coverage=1 00:12:08.388 --rc genhtml_legend=1 00:12:08.388 --rc geninfo_all_blocks=1 00:12:08.388 --rc geninfo_unexecuted_blocks=1 00:12:08.388 00:12:08.388 ' 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.388 --rc genhtml_branch_coverage=1 00:12:08.388 --rc genhtml_function_coverage=1 00:12:08.388 --rc genhtml_legend=1 00:12:08.388 --rc geninfo_all_blocks=1 00:12:08.388 --rc geninfo_unexecuted_blocks=1 00:12:08.388 00:12:08.388 ' 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.388 --rc genhtml_branch_coverage=1 00:12:08.388 --rc genhtml_function_coverage=1 00:12:08.388 --rc genhtml_legend=1 00:12:08.388 --rc geninfo_all_blocks=1 00:12:08.388 --rc geninfo_unexecuted_blocks=1 00:12:08.388 00:12:08.388 ' 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.388 --rc genhtml_branch_coverage=1 00:12:08.388 --rc genhtml_function_coverage=1 00:12:08.388 --rc genhtml_legend=1 00:12:08.388 --rc geninfo_all_blocks=1 00:12:08.388 --rc geninfo_unexecuted_blocks=1 00:12:08.388 00:12:08.388 ' 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.388 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.389 14:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.389 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:16.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:16.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.529 14:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.529 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:16.529 Found net devices under 0000:31:00.0: cvl_0_0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:16.530 Found net devices under 0000:31:00.1: cvl_0_1 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:16.530 14:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:12:16.530 00:12:16.530 --- 10.0.0.2 ping statistics --- 00:12:16.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.530 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:12:16.530 00:12:16.530 --- 10.0.0.1 ping statistics --- 00:12:16.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.530 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.530 ************************************ 00:12:16.530 START TEST nvmf_filesystem_no_in_capsule 00:12:16.530 ************************************ 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1577864 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1577864 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1577864 ']' 00:12:16.530 
14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.530 14:08:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.530 [2024-10-13 14:08:19.637228] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:12:16.530 [2024-10-13 14:08:19.637278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.530 [2024-10-13 14:08:19.774795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:16.530 [2024-10-13 14:08:19.821938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.530 [2024-10-13 14:08:19.843768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.530 [2024-10-13 14:08:19.843808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.530 [2024-10-13 14:08:19.843816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.530 [2024-10-13 14:08:19.843823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.530 [2024-10-13 14:08:19.843829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
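[editor's note] At this point nvmf_tgt has been launched inside the target network namespace and the harness blocks in waitforlisten until the RPC socket answers. Condensed from the trace, with the wait written out as a minimal equivalent — rpc_get_methods is just a cheap query used here for polling; the harness's own retry loop differs in detail, and the relative paths are stand-ins for the workspace paths in the log:

    # Target start, as traced: run nvmf_tgt inside the target's netns.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten equivalent: poll the UNIX-domain RPC socket until it responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, the rpc_cmd calls that follow in the trace (nvmf_create_transport -t tcp -o -u 8192 -c 0, bdev_malloc_create 512 512 -b Malloc1, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420) assemble the filesystem test's target one object at a time.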
00:12:16.530 [2024-10-13 14:08:19.845615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.530 [2024-10-13 14:08:19.845768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.530 [2024-10-13 14:08:19.845922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.530 [2024-10-13 14:08:19.845922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.790 [2024-10-13 14:08:20.476109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.790 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.051 Malloc1 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.051 14:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.051 [2024-10-13 14:08:20.607832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.051 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:17.051 { 00:12:17.051 "name": "Malloc1", 00:12:17.051 "aliases": [ 00:12:17.051 "39ce4ef6-6b8b-4e57-83ed-3dcacb8c4f0d" 00:12:17.051 ], 00:12:17.051 "product_name": "Malloc disk", 00:12:17.051 "block_size": 512, 00:12:17.051 "num_blocks": 1048576, 00:12:17.051 "uuid": "39ce4ef6-6b8b-4e57-83ed-3dcacb8c4f0d", 00:12:17.051 "assigned_rate_limits": { 00:12:17.051 "rw_ios_per_sec": 0, 00:12:17.051 "rw_mbytes_per_sec": 0, 00:12:17.051 "r_mbytes_per_sec": 0, 00:12:17.051 "w_mbytes_per_sec": 0 00:12:17.051 }, 00:12:17.051 "claimed": true, 00:12:17.051 "claim_type": "exclusive_write", 00:12:17.051 "zoned": false, 00:12:17.051 "supported_io_types": { 00:12:17.051 "read": 
true, 00:12:17.051 "write": true, 00:12:17.051 "unmap": true, 00:12:17.051 "flush": true, 00:12:17.051 "reset": true, 00:12:17.051 "nvme_admin": false, 00:12:17.051 "nvme_io": false, 00:12:17.051 "nvme_io_md": false, 00:12:17.051 "write_zeroes": true, 00:12:17.051 "zcopy": true, 00:12:17.051 "get_zone_info": false, 00:12:17.051 "zone_management": false, 00:12:17.051 "zone_append": false, 00:12:17.051 "compare": false, 00:12:17.051 "compare_and_write": false, 00:12:17.051 "abort": true, 00:12:17.051 "seek_hole": false, 00:12:17.052 "seek_data": false, 00:12:17.052 "copy": true, 00:12:17.052 "nvme_iov_md": false 00:12:17.052 }, 00:12:17.052 "memory_domains": [ 00:12:17.052 { 00:12:17.052 "dma_device_id": "system", 00:12:17.052 "dma_device_type": 1 00:12:17.052 }, 00:12:17.052 { 00:12:17.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.052 "dma_device_type": 2 00:12:17.052 } 00:12:17.052 ], 00:12:17.052 "driver_specific": {} 00:12:17.052 } 00:12:17.052 ]' 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:17.052 14:08:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.965 14:08:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.965 14:08:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:18.965 14:08:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.965 14:08:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:18.965 14:08:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:20.877 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:21.137 14:08:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.077 ************************************ 00:12:22.077 START TEST filesystem_ext4 00:12:22.077 ************************************ 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
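The block above is the host side of the test: nvme-cli connects to the subsystem, the harness waits for a block device carrying the subsystem serial, resolves its name, checks that the namespace size matches the 512 MiB malloc bdev, and carves a single GPT partition for the filesystem subtests. Reassembled as a plain shell sequence using only values that appear in the trace (a sketch, not the harness verbatim):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    # waitforserial: block until one device reports the subsystem serial.
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )); do
        sleep 2
    done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # 512 MiB malloc bdev, so the namespace must be exactly 536870912 bytes.
    mkdir -p /mnt/device
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1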
00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:22.077 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:22.077 mke2fs 1.47.0 (5-Feb-2023) 00:12:22.077 Discarding device blocks: 0/522240 done 00:12:22.077 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:22.077 Filesystem UUID: 4681748b-6dcf-44c5-9469-30f4cf5fa3f3 00:12:22.077 Superblock backups stored on blocks: 00:12:22.077 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:22.077 00:12:22.077 Allocating group tables: 0/64 done 00:12:22.077 Writing inode tables: 0/64 done 00:12:22.649 Creating journal (8192 blocks): done 00:12:24.859 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:12:24.859 00:12:24.859 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:24.859 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:30.141 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:30.403 
14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1577864 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:30.403 00:12:30.403 real 0m8.306s 00:12:30.403 user 0m0.029s 00:12:30.403 sys 0m0.080s 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:30.403 ************************************ 00:12:30.403 END TEST filesystem_ext4 00:12:30.403 ************************************ 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.403 14:08:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.403 ************************************ 00:12:30.403 START TEST filesystem_btrfs 00:12:30.403 ************************************ 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:30.403 14:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:30.403 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:30.664 btrfs-progs v6.8.1 00:12:30.664 See https://btrfs.readthedocs.io for more information. 00:12:30.664 00:12:30.664 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:30.664 NOTE: several default settings have changed in version 5.15, please make sure 00:12:30.664 this does not affect your deployments: 00:12:30.664 - DUP for metadata (-m dup) 00:12:30.664 - enabled no-holes (-O no-holes) 00:12:30.664 - enabled free-space-tree (-R free-space-tree) 00:12:30.664 00:12:30.664 Label: (null) 00:12:30.664 UUID: 4581d69a-e5d5-4b1a-94b6-d5b01eee9869 00:12:30.664 Node size: 16384 00:12:30.664 Sector size: 4096 (CPU page size: 4096) 00:12:30.664 Filesystem size: 510.00MiB 00:12:30.664 Block group profiles: 00:12:30.664 Data: single 8.00MiB 00:12:30.664 Metadata: DUP 32.00MiB 00:12:30.664 System: DUP 8.00MiB 00:12:30.664 SSD detected: yes 00:12:30.664 Zoned device: no 00:12:30.664 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:30.664 Checksum: crc32c 00:12:30.664 Number of devices: 1 00:12:30.664 Devices: 00:12:30.664 ID SIZE PATH 00:12:30.664 1 510.00MiB /dev/nvme0n1p1 00:12:30.664 00:12:30.664 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:30.664 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:30.924 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1577864 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.185 
14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.185 00:12:31.185 real 0m0.671s 00:12:31.185 user 0m0.034s 00:12:31.185 sys 0m0.117s 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:31.185 ************************************ 00:12:31.185 END TEST filesystem_btrfs 00:12:31.185 ************************************ 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.185 ************************************ 00:12:31.185 START TEST filesystem_xfs 00:12:31.185 ************************************ 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:31.185 14:08:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:31.185 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:31.185 = sectsz=512 attr=2, projid32bit=1 00:12:31.185 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:31.185 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:31.185 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:31.185 = sunit=0 swidth=0 blks 00:12:31.185 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:31.185 log =internal log bsize=4096 blocks=16384, version=2 00:12:31.185 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:31.185 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:32.127 Discarding blocks...Done. 00:12:32.127 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:32.127 14:08:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:34.669 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1577864 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:34.670 00:12:34.670 real 0m3.486s 00:12:34.670 user 0m0.035s 00:12:34.670 sys 0m0.071s 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:34.670 ************************************ 00:12:34.670 END TEST filesystem_xfs 00:12:34.670 ************************************ 00:12:34.670 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.930 14:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1577864 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1577864 ']' 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1577864 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.930 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1577864 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1577864' 00:12:35.190 killing process with pid 1577864 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1577864 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1577864 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:35.190 00:12:35.190 real 0m19.282s 00:12:35.190 user 1m15.926s 00:12:35.190 sys 0m1.428s 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.190 ************************************ 00:12:35.190 END TEST nvmf_filesystem_no_in_capsule 00:12:35.190 ************************************ 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.190 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:35.450 ************************************ 00:12:35.450 START TEST nvmf_filesystem_in_capsule 00:12:35.450 ************************************ 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1581789 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1581789 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1581789 ']' 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
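nvmf_filesystem_in_capsule repeats the same scenario with in-capsule data enabled: the trace that follows issues the same RPC sequence as the first pass, differing only in the -c 4096 passed to nvmf_create_transport (versus -c 0 above). The sequence, sketched with SPDK's stock rpc.py (the harness's rpc_cmd is effectively a wrapper around it):

    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4096-byte in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420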
00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:35.450 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.450 [2024-10-13 14:08:38.993808] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:12:35.450 [2024-10-13 14:08:38.993860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.450 [2024-10-13 14:08:39.134298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:35.710 [2024-10-13 14:08:39.182690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.711 [2024-10-13 14:08:39.206771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.711 [2024-10-13 14:08:39.206809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.711 [2024-10-13 14:08:39.206815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.711 [2024-10-13 14:08:39.206821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.711 [2024-10-13 14:08:39.206825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.711 [2024-10-13 14:08:39.208497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.711 [2024-10-13 14:08:39.208655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.711 [2024-10-13 14:08:39.208804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.711 [2024-10-13 14:08:39.208806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.281 [2024-10-13 14:08:39.849611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.281 Malloc1 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.281 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.282 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.282 [2024-10-13 14:08:39.981452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.282 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.282 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:36.282 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:36.542 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:36.542 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:36.542 14:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:36.542 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:36.542 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.542 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.542 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.542 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:36.542 { 00:12:36.542 "name": "Malloc1", 00:12:36.542 "aliases": [ 00:12:36.542 "942d8604-9280-41b2-99b6-6b4dfd3dc301" 00:12:36.542 ], 00:12:36.542 "product_name": "Malloc disk", 00:12:36.542 "block_size": 512, 00:12:36.542 "num_blocks": 1048576, 00:12:36.542 "uuid": "942d8604-9280-41b2-99b6-6b4dfd3dc301", 00:12:36.542 "assigned_rate_limits": { 00:12:36.542 "rw_ios_per_sec": 0, 00:12:36.542 "rw_mbytes_per_sec": 0, 00:12:36.542 "r_mbytes_per_sec": 0, 00:12:36.542 "w_mbytes_per_sec": 0 00:12:36.542 }, 00:12:36.542 "claimed": true, 00:12:36.542 "claim_type": "exclusive_write", 00:12:36.542 "zoned": false, 00:12:36.543 "supported_io_types": { 00:12:36.543 "read": true, 00:12:36.543 "write": true, 00:12:36.543 "unmap": true, 00:12:36.543 "flush": true, 00:12:36.543 "reset": true, 00:12:36.543 "nvme_admin": false, 00:12:36.543 "nvme_io": false, 00:12:36.543 "nvme_io_md": false, 00:12:36.543 "write_zeroes": true, 00:12:36.543 "zcopy": true, 00:12:36.543 "get_zone_info": false, 00:12:36.543 "zone_management": false, 00:12:36.543 "zone_append": false, 00:12:36.543 "compare": false, 00:12:36.543 "compare_and_write": false, 00:12:36.543 "abort": true, 00:12:36.543 "seek_hole": false, 00:12:36.543 "seek_data": false, 00:12:36.543 "copy": true, 00:12:36.543 "nvme_iov_md": false 00:12:36.543 }, 00:12:36.543 "memory_domains": [ 00:12:36.543 { 00:12:36.543 "dma_device_id": "system", 00:12:36.543 "dma_device_type": 1 00:12:36.543 }, 00:12:36.543 { 00:12:36.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.543 "dma_device_type": 2 00:12:36.543 } 00:12:36.543 ], 00:12:36.543 "driver_specific": {} 00:12:36.543 } 00:12:36.543 ]' 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:36.543 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:36.543 14:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.453 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.453 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.453 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.453 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:38.453 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:40.363 14:08:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:40.933 14:08:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.875 ************************************ 00:12:41.875 START TEST filesystem_in_capsule_ext4 00:12:41.875 ************************************ 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:41.875 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:41.875 mke2fs 1.47.0 (5-Feb-2023) 00:12:41.875 Discarding device blocks: 0/522240 done 00:12:41.875 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:41.875 Filesystem UUID: a0feb3c5-c2b0-4249-963f-5b0ca6768f5f 00:12:41.875 Superblock backups stored on blocks: 00:12:41.875 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:12:41.875 00:12:41.875 Allocating group tables: 0/64 done 00:12:41.875 Writing inode tables: 0/64 done 00:12:42.135 Creating journal (8192 blocks): done 00:12:42.395 Writing superblocks and filesystem accounting information: 0/64 done 00:12:42.395 00:12:42.395 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:42.395 14:08:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:47.677 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:47.678 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1581789 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:47.939 00:12:47.939 real 0m6.028s 00:12:47.939 user 0m0.023s 00:12:47.939 sys 0m0.084s 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:47.939 ************************************ 00:12:47.939 END TEST filesystem_in_capsule_ext4 00:12:47.939 ************************************ 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.939 14:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.939 ************************************ 00:12:47.939 START TEST filesystem_in_capsule_btrfs 00:12:47.939 ************************************ 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:47.939 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:48.511 btrfs-progs v6.8.1 00:12:48.511 See https://btrfs.readthedocs.io for more information. 00:12:48.511 00:12:48.511 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:48.511 NOTE: several default settings have changed in version 5.15, please make sure 00:12:48.511 this does not affect your deployments: 00:12:48.511 - DUP for metadata (-m dup) 00:12:48.511 - enabled no-holes (-O no-holes) 00:12:48.511 - enabled free-space-tree (-R free-space-tree) 00:12:48.511 00:12:48.511 Label: (null) 00:12:48.511 UUID: 56ba5888-7cd3-48b1-a5d8-62f7df788160 00:12:48.511 Node size: 16384 00:12:48.511 Sector size: 4096 (CPU page size: 4096) 00:12:48.511 Filesystem size: 510.00MiB 00:12:48.511 Block group profiles: 00:12:48.511 Data: single 8.00MiB 00:12:48.511 Metadata: DUP 32.00MiB 00:12:48.511 System: DUP 8.00MiB 00:12:48.511 SSD detected: yes 00:12:48.511 Zoned device: no 00:12:48.511 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:48.511 Checksum: crc32c 00:12:48.511 Number of devices: 1 00:12:48.511 Devices: 00:12:48.511 ID SIZE PATH 00:12:48.511 1 510.00MiB /dev/nvme0n1p1 00:12:48.511 00:12:48.511 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:48.511 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:48.511 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:48.511 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:48.511 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:48.511 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:48.511 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:48.511 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:48.771 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1581789 00:12:48.771 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:48.771 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:48.771 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:48.771 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:48.772 00:12:48.772 real 0m0.729s 00:12:48.772 user 0m0.027s 00:12:48.772 sys 0m0.119s 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:48.772 ************************************ 00:12:48.772 END TEST filesystem_in_capsule_btrfs 00:12:48.772 ************************************ 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:48.772 ************************************ 00:12:48.772 START TEST filesystem_in_capsule_xfs 00:12:48.772 ************************************ 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:48.772 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:48.772 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:48.772 = sectsz=512 attr=2, projid32bit=1 00:12:48.772 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:48.772 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:48.772 data = bsize=4096 blocks=130560, imaxpct=25 00:12:48.772 = sunit=0 swidth=0 blks 00:12:48.772 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:48.772 log =internal log bsize=4096 blocks=16384, version=2 00:12:48.772 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:48.772 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:49.713 Discarding blocks...Done. 
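All three passes above (ext4, btrfs, and now xfs) funnel through the same make_filesystem helper in common/autotest_common.sh. Reconstructed from the xtrace, its shape is roughly the sketch below; only the traced lines (the locals, the ext4 comparison, force=-f, and the mkfs.$fstype call) are certain, while the uppercase -F spelling on the ext4 branch and the absence of any retry around mkfs are assumptions:

# Minimal sketch of make_filesystem as implied by the xtrace above.
# The '-F' flag for ext4 is an assumption; this run only exercises the
# non-ext4 '-f' path (mkfs.btrfs / mkfs.xfs).
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs forces with uppercase -F (assumed)
    else
        force=-f    # mkfs.btrfs and mkfs.xfs force with lowercase -f
    fi

    mkfs.$fstype $force "$dev_name" && return 0
}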
00:12:49.713 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:49.713 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1581789 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:52.258 00:12:52.258 real 0m3.619s 00:12:52.258 user 0m0.023s 00:12:52.258 sys 0m0.085s 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.258 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:52.258 ************************************ 00:12:52.258 END TEST filesystem_in_capsule_xfs 00:12:52.258 ************************************ 00:12:52.520 14:08:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1581789 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1581789 ']' 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1581789 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.520 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1581789 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1581789' 00:12:52.780 killing process with pid 1581789 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1581789 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1581789 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:52.780 00:12:52.780 real 0m17.518s 00:12:52.780 user 1m8.888s 00:12:52.780 sys 0m1.431s 00:12:52.780 14:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.780 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.780 ************************************ 00:12:52.780 END TEST nvmf_filesystem_in_capsule 00:12:52.780 ************************************ 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.041 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.042 rmmod nvme_tcp 00:12:53.042 rmmod nvme_fabrics 00:12:53.042 rmmod nvme_keyring 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.042 14:08:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.955 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.955 00:12:54.955 real 0m47.268s 00:12:54.955 user 2m27.333s 00:12:54.955 sys 0m8.759s 00:12:54.955 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.955 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:54.955 
************************************ 00:12:54.955 END TEST nvmf_filesystem 00:12:54.955 ************************************ 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.216 ************************************ 00:12:55.216 START TEST nvmf_target_discovery 00:12:55.216 ************************************ 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:55.216 * Looking for test storage... 00:12:55.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.216 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:55.217 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:55.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.479 --rc genhtml_branch_coverage=1 00:12:55.479 --rc genhtml_function_coverage=1 00:12:55.479 --rc genhtml_legend=1 00:12:55.479 --rc geninfo_all_blocks=1 00:12:55.479 --rc geninfo_unexecuted_blocks=1 00:12:55.479 00:12:55.479 ' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:55.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.479 --rc genhtml_branch_coverage=1 00:12:55.479 --rc genhtml_function_coverage=1 00:12:55.479 --rc genhtml_legend=1 00:12:55.479 --rc geninfo_all_blocks=1 00:12:55.479 --rc geninfo_unexecuted_blocks=1 00:12:55.479 00:12:55.479 ' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:55.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.479 --rc genhtml_branch_coverage=1 00:12:55.479 --rc genhtml_function_coverage=1 00:12:55.479 --rc genhtml_legend=1 00:12:55.479 --rc geninfo_all_blocks=1 00:12:55.479 --rc geninfo_unexecuted_blocks=1 00:12:55.479 00:12:55.479 ' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:55.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.479 --rc genhtml_branch_coverage=1 00:12:55.479 --rc genhtml_function_coverage=1 00:12:55.479 --rc genhtml_legend=1 00:12:55.479 --rc geninfo_all_blocks=1 00:12:55.479 --rc geninfo_unexecuted_blocks=1 00:12:55.479 00:12:55.479 ' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:55.479 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.480 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.747 14:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.747 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:03.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:03.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:03.748 Found net devices under 0000:31:00.0: cvl_0_0 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
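The loop being traced here (nvmf/common.sh@408-427) maps each supported PCI function to its kernel network interface through sysfs, which is how the two 0x159b ports end up as cvl_0_0 and cvl_0_1. Condensed into one place, with variable names taken verbatim from the trace, it looks roughly like this; reading operstate explicitly is an assumption, since the trace only shows the resulting '[[ up == up ]]' comparison:

# Condensed sketch of the netdev walk in test/nvmf/common.sh.
for pci in "${pci_devs[@]}"; do
    # every netdev registered under this PCI function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${!pci_net_devs[@]}"; do
        # assumed: the traced 'up' comes from the interface operstate
        [[ $(cat "${pci_net_devs[net_dev]}/operstate") == up ]] || unset "pci_net_devs[net_dev]"
    done
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done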
00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:03.748 Found net devices under 0000:31:00.1: cvl_0_1 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.748 14:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:13:03.748 00:13:03.748 --- 10.0.0.2 ping statistics --- 00:13:03.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.748 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:13:03.748 00:13:03.748 --- 10.0.0.1 ping statistics --- 00:13:03.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.748 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1589857 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1589857 00:13:03.748 14:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1589857 ']' 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.748 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:03.748 [2024-10-13 14:09:06.613386] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:13:03.748 [2024-10-13 14:09:06.613448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.748 [2024-10-13 14:09:06.755849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:03.748 [2024-10-13 14:09:06.802921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.748 [2024-10-13 14:09:06.831233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.749 [2024-10-13 14:09:06.831274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.749 [2024-10-13 14:09:06.831283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.749 [2024-10-13 14:09:06.831290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.749 [2024-10-13 14:09:06.831296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
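Everything nvmftestinit and nvmfappstart just traced reduces to a short bring-up: move the target-side port into its own network namespace, address both ends, open TCP/4420, verify reachability both ways, then start nvmf_tgt inside the namespace. Collapsed below for reference; addresses, interface names, and flags are verbatim from the log, while the shortened binary path, the omitted iptables comment, and the backgrounding/PID capture are simplified assumptions:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                         # later torn down by killprocess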
00:13:03.749 [2024-10-13 14:09:06.833611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.749 [2024-10-13 14:09:06.833772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.749 [2024-10-13 14:09:06.833925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.749 [2024-10-13 14:09:06.833925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.749 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.749 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:03.749 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:03.749 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.749 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.010 [2024-10-13 14:09:07.500883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.010 Null1 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.010 14:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.010 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 [2024-10-13 14:09:07.561220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 Null2 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:13:04.011 Null3 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 Null4 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.011 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:13:04.273 00:13:04.273 Discovery Log Number of Records 6, Generation counter 6 00:13:04.273 =====Discovery Log Entry 0====== 00:13:04.273 trtype: tcp 00:13:04.273 adrfam: ipv4 00:13:04.273 subtype: current discovery subsystem 00:13:04.273 treq: not required 00:13:04.273 portid: 0 00:13:04.273 trsvcid: 4420 00:13:04.273 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:04.273 traddr: 10.0.0.2 00:13:04.273 eflags: explicit discovery connections, duplicate discovery information 00:13:04.273 sectype: none 00:13:04.273 =====Discovery Log Entry 1====== 00:13:04.273 trtype: tcp 00:13:04.273 adrfam: ipv4 00:13:04.273 subtype: nvme subsystem 00:13:04.273 treq: not required 00:13:04.273 portid: 0 00:13:04.273 trsvcid: 4420 00:13:04.273 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:04.273 traddr: 10.0.0.2 00:13:04.273 eflags: none 00:13:04.273 sectype: none 00:13:04.273 =====Discovery Log Entry 2====== 00:13:04.273 trtype: tcp 00:13:04.273 adrfam: ipv4 00:13:04.273 subtype: nvme subsystem 00:13:04.273 treq: not required 00:13:04.273 portid: 0 00:13:04.273 trsvcid: 4420 00:13:04.273 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:04.273 traddr: 10.0.0.2 00:13:04.273 eflags: none 00:13:04.273 sectype: none 00:13:04.273 =====Discovery Log Entry 3====== 00:13:04.273 trtype: tcp 00:13:04.273 adrfam: ipv4 00:13:04.273 subtype: nvme subsystem 00:13:04.273 treq: not required 00:13:04.273 portid: 0 00:13:04.273 trsvcid: 4420 00:13:04.273 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:04.273 traddr: 10.0.0.2 00:13:04.273 eflags: none 00:13:04.273 sectype: none 00:13:04.273 =====Discovery Log Entry 4====== 00:13:04.273 trtype: tcp 00:13:04.273 adrfam: ipv4 00:13:04.273 subtype: nvme subsystem 
00:13:04.273 treq: not required 00:13:04.273 portid: 0 00:13:04.273 trsvcid: 4420 00:13:04.273 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:04.273 traddr: 10.0.0.2 00:13:04.273 eflags: none 00:13:04.273 sectype: none 00:13:04.273 =====Discovery Log Entry 5====== 00:13:04.273 trtype: tcp 00:13:04.273 adrfam: ipv4 00:13:04.273 subtype: discovery subsystem referral 00:13:04.273 treq: not required 00:13:04.273 portid: 0 00:13:04.273 trsvcid: 4430 00:13:04.273 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:04.273 traddr: 10.0.0.2 00:13:04.273 eflags: none 00:13:04.273 sectype: none 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:04.273 Perform nvmf subsystem discovery via RPC 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.273 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.273 [ 00:13:04.273 { 00:13:04.273 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.273 "subtype": "Discovery", 00:13:04.273 "listen_addresses": [ 00:13:04.273 { 00:13:04.273 "trtype": "TCP", 00:13:04.273 "adrfam": "IPv4", 00:13:04.273 "traddr": "10.0.0.2", 00:13:04.273 "trsvcid": "4420" 00:13:04.273 } 00:13:04.273 ], 00:13:04.273 "allow_any_host": true, 00:13:04.273 "hosts": [] 00:13:04.273 }, 00:13:04.273 { 00:13:04.273 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.273 "subtype": "NVMe", 00:13:04.273 "listen_addresses": [ 00:13:04.273 { 00:13:04.273 "trtype": "TCP", 00:13:04.273 "adrfam": "IPv4", 00:13:04.273 "traddr": "10.0.0.2", 00:13:04.273 "trsvcid": "4420" 00:13:04.273 } 00:13:04.273 ], 00:13:04.273 "allow_any_host": true, 00:13:04.273 "hosts": [], 00:13:04.273 "serial_number": "SPDK00000000000001", 00:13:04.273 "model_number": "SPDK bdev Controller", 00:13:04.273 "max_namespaces": 32, 00:13:04.273 "min_cntlid": 1, 00:13:04.273 "max_cntlid": 65519, 00:13:04.273 "namespaces": [ 00:13:04.273 { 00:13:04.273 "nsid": 1, 00:13:04.273 "bdev_name": "Null1", 00:13:04.273 "name": "Null1", 00:13:04.273 "nguid": "22503542DD8C4E05852A66EAC7AFD5D5", 00:13:04.273 "uuid": "22503542-dd8c-4e05-852a-66eac7afd5d5" 00:13:04.273 } 00:13:04.273 ] 00:13:04.273 }, 00:13:04.273 { 00:13:04.273 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:04.273 "subtype": "NVMe", 00:13:04.273 "listen_addresses": [ 00:13:04.273 { 00:13:04.273 "trtype": "TCP", 00:13:04.273 "adrfam": "IPv4", 00:13:04.273 "traddr": "10.0.0.2", 00:13:04.273 "trsvcid": "4420" 00:13:04.273 } 00:13:04.273 ], 00:13:04.273 "allow_any_host": true, 00:13:04.273 "hosts": [], 00:13:04.273 "serial_number": "SPDK00000000000002", 00:13:04.273 "model_number": "SPDK bdev Controller", 00:13:04.273 "max_namespaces": 32, 00:13:04.273 "min_cntlid": 1, 00:13:04.273 "max_cntlid": 65519, 00:13:04.273 "namespaces": [ 00:13:04.273 { 00:13:04.273 "nsid": 1, 00:13:04.273 "bdev_name": "Null2", 00:13:04.273 "name": "Null2", 00:13:04.273 "nguid": "5577C74B11534FCF94075EA35B98A9AB", 00:13:04.273 "uuid": "5577c74b-1153-4fcf-9407-5ea35b98a9ab" 00:13:04.273 } 00:13:04.273 ] 00:13:04.273 }, 00:13:04.273 { 00:13:04.273 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:04.273 "subtype": "NVMe", 00:13:04.273 "listen_addresses": [ 00:13:04.273 { 00:13:04.273 "trtype": "TCP", 00:13:04.273 "adrfam": "IPv4", 00:13:04.273 "traddr": "10.0.0.2", 
00:13:04.273 "trsvcid": "4420" 00:13:04.273 } 00:13:04.273 ], 00:13:04.273 "allow_any_host": true, 00:13:04.273 "hosts": [], 00:13:04.273 "serial_number": "SPDK00000000000003", 00:13:04.273 "model_number": "SPDK bdev Controller", 00:13:04.273 "max_namespaces": 32, 00:13:04.273 "min_cntlid": 1, 00:13:04.273 "max_cntlid": 65519, 00:13:04.273 "namespaces": [ 00:13:04.273 { 00:13:04.273 "nsid": 1, 00:13:04.273 "bdev_name": "Null3", 00:13:04.273 "name": "Null3", 00:13:04.273 "nguid": "AC9B39DFB5E644E085F451070D1521E7", 00:13:04.273 "uuid": "ac9b39df-b5e6-44e0-85f4-51070d1521e7" 00:13:04.273 } 00:13:04.273 ] 00:13:04.273 }, 00:13:04.273 { 00:13:04.273 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:04.274 "subtype": "NVMe", 00:13:04.274 "listen_addresses": [ 00:13:04.274 { 00:13:04.274 "trtype": "TCP", 00:13:04.274 "adrfam": "IPv4", 00:13:04.274 "traddr": "10.0.0.2", 00:13:04.274 "trsvcid": "4420" 00:13:04.274 } 00:13:04.274 ], 00:13:04.274 "allow_any_host": true, 00:13:04.274 "hosts": [], 00:13:04.274 "serial_number": "SPDK00000000000004", 00:13:04.274 "model_number": "SPDK bdev Controller", 00:13:04.274 "max_namespaces": 32, 00:13:04.274 "min_cntlid": 1, 00:13:04.274 "max_cntlid": 65519, 00:13:04.274 "namespaces": [ 00:13:04.274 { 00:13:04.274 "nsid": 1, 00:13:04.274 "bdev_name": "Null4", 00:13:04.274 "name": "Null4", 00:13:04.274 "nguid": "23C9B65C60354D65A49392540997F81D", 00:13:04.274 "uuid": "23c9b65c-6035-4d65-a493-92540997f81d" 00:13:04.274 } 00:13:04.274 ] 00:13:04.274 } 00:13:04.274 ] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.274 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.535 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.535 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:04.535 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.535 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:04.535 14:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.535 rmmod nvme_tcp 00:13:04.535 rmmod nvme_fabrics 00:13:04.535 rmmod nvme_keyring 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1589857 ']' 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1589857 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1589857 ']' 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1589857 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1589857 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589857' 00:13:04.535 killing process with pid 1589857 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1589857 00:13:04.535 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1589857 00:13:04.796 14:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.796 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.343 00:13:07.343 real 0m11.723s 00:13:07.343 user 0m8.566s 00:13:07.343 sys 0m6.091s 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:07.343 ************************************ 00:13:07.343 END TEST nvmf_target_discovery 00:13:07.343 ************************************ 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.343 ************************************ 00:13:07.343 START TEST nvmf_referrals 00:13:07.343 ************************************ 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:07.343 * Looking for test storage... 
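The nvmf_target_discovery run that ends above walked the target through a full create/discover/teardown cycle: four null bdevs exposed as namespaces of four subsystems listening on 10.0.0.2:4420, one referral registered for port 4430, and an nvme discover that must return six log records (the current discovery subsystem, cnode1 through cnode4, and the referral), which is exactly the order discovery log entries 0 through 5 come back in above. A condensed sketch of that RPC sequence, assuming rpc_cmd is the autotest harness wrapper around scripts/rpc.py talking to the running nvmf_tgt:

# Sketch only, not the verbatim target/discovery.sh; commands and arguments
# are taken from the trace above, rpc_cmd is assumed to wrap scripts/rpc.py.
for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # the discovery service itself
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # sixth discovery record
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420   # expect 6 records
# Teardown mirrors setup; afterwards bdev_get_bdevs | jq -r '.[].name' must be empty.
for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430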
00:13:07.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.343 --rc genhtml_branch_coverage=1 00:13:07.343 --rc genhtml_function_coverage=1 00:13:07.343 --rc genhtml_legend=1 00:13:07.343 --rc geninfo_all_blocks=1 00:13:07.343 --rc geninfo_unexecuted_blocks=1 00:13:07.343 00:13:07.343 ' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.343 --rc genhtml_branch_coverage=1 00:13:07.343 --rc genhtml_function_coverage=1 00:13:07.343 --rc genhtml_legend=1 00:13:07.343 --rc geninfo_all_blocks=1 00:13:07.343 --rc geninfo_unexecuted_blocks=1 00:13:07.343 00:13:07.343 ' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.343 --rc genhtml_branch_coverage=1 00:13:07.343 --rc genhtml_function_coverage=1 00:13:07.343 --rc genhtml_legend=1 00:13:07.343 --rc geninfo_all_blocks=1 00:13:07.343 --rc geninfo_unexecuted_blocks=1 00:13:07.343 00:13:07.343 ' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.343 --rc genhtml_branch_coverage=1 00:13:07.343 --rc genhtml_function_coverage=1 00:13:07.343 --rc genhtml_legend=1 00:13:07.343 --rc geninfo_all_blocks=1 00:13:07.343 --rc geninfo_unexecuted_blocks=1 00:13:07.343 00:13:07.343 ' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.343 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.344 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:15.484 14:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:15.484 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:15.484 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:15.484 
14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:15.484 Found net devices under 0000:31:00.0: cvl_0_0 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:15.484 Found net devices under 0000:31:00.1: cvl_0_1 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:15.484 14:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:15.484 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:13:15.485 00:13:15.485 --- 10.0.0.2 ping statistics --- 00:13:15.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.485 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:13:15.485 00:13:15.485 --- 10.0.0.1 ping statistics --- 00:13:15.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.485 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1594794 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1594794 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1594794 ']' 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
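With cvl_0_0 moved into the cvl_0_0_ns_spdk namespace, 10.0.0.2/10.0.0.1 assigned, both directions verified by ping, and nvmf_tgt started behind waitforlisten, the referrals test body takes over in the trace below. It registers three discovery referrals and checks that the RPC view and an initiator-side nvme discover agree on them; a condensed sketch of that flow, under the same assumption that rpc_cmd wraps scripts/rpc.py:

# Sketch of the sequence traced below; addresses, ports, and the jq filter
# are taken from target/referrals.sh as it appears in the log.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do    # NVMF_REFERRAL_IP_1..3, port 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc_cmd nvmf_discovery_get_referrals | jq length   # expect 3

# Initiator view: every record except the current discovery subsystem
# must carry one of the referral addresses.
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# expect: 127.0.0.2 127.0.0.3 127.0.0.4

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do    # teardown, then expect 0 referrals
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
rpc_cmd nvmf_discovery_get_referrals | jq length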
00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.485 14:09:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.485 [2024-10-13 14:09:18.555398] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:13:15.485 [2024-10-13 14:09:18.555461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.485 [2024-10-13 14:09:18.697757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:15.485 [2024-10-13 14:09:18.745319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.485 [2024-10-13 14:09:18.773511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.485 [2024-10-13 14:09:18.773555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.485 [2024-10-13 14:09:18.773564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.485 [2024-10-13 14:09:18.773571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.485 [2024-10-13 14:09:18.773577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.485 [2024-10-13 14:09:18.775797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.485 [2024-10-13 14:09:18.775955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.485 [2024-10-13 14:09:18.776152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.485 [2024-10-13 14:09:18.776153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.747 [2024-10-13 14:09:19.439417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:15.747 14:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.747 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.009 [2024-10-13 14:09:19.455696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.009 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.270 14:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:16.270 14:09:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.531 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:16.531 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
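The checks above exercise the referral RPCs end to end: add referrals, count them with jq length, compare the sorted traddr lists, then remove them and expect an empty set. The same round trip by hand, as a sketch (rpc.py wraps the nvmf_discovery_* calls that rpc_cmd issues here):

  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length                   # referral count
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'  # 127.0.0.2
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1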
00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:16.532 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:16.793 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.054 14:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:17.054 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.316 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:17.578 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:17.578 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
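get_discovery_entries is the host-side counterpart of those RPC checks: it pulls the discovery log page over TCP and filters the records by subtype. Roughly, with the hostnqn/hostid options elided:

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  # expected while the cnode1 referral above is registered: nqn.2016-06.io.spdk:cnode1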
00:13:17.578 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:17.578 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:17.578 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.578 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:17.838 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:17.839 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.099 rmmod nvme_tcp 00:13:18.099 rmmod nvme_fabrics 00:13:18.099 rmmod nvme_keyring 00:13:18.099 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1594794 ']' 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1594794 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1594794 ']' 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1594794 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594794 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1594794' 00:13:18.100 killing process with pid 1594794 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1594794 00:13:18.100 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1594794 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.361 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.276 00:13:20.276 real 0m13.371s 00:13:20.276 user 0m15.331s 00:13:20.276 sys 0m6.699s 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:20.276 ************************************ 00:13:20.276 END TEST nvmf_referrals 00:13:20.276 ************************************ 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.276 ************************************ 00:13:20.276 START TEST nvmf_connect_disconnect 00:13:20.276 ************************************ 00:13:20.276 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:20.539 * Looking for test storage... 
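Before the next test begins, note the nvmftestfini teardown that closed nvmf_referrals above. Condensed, and assuming _remove_spdk_ns amounts to deleting the namespace, it is roughly:

  modprobe -r nvme-tcp nvme-fabrics                      # unload host-side modules
  kill "$nvmfpid" && wait "$nvmfpid"                     # stop the target (reactor_0)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns (assumed body)
  ip -4 addr flush cvl_0_1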
00:13:20.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.539 --rc genhtml_branch_coverage=1 00:13:20.539 --rc genhtml_function_coverage=1 00:13:20.539 --rc genhtml_legend=1 00:13:20.539 --rc geninfo_all_blocks=1 00:13:20.539 --rc geninfo_unexecuted_blocks=1 00:13:20.539 00:13:20.539 ' 00:13:20.539 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.539 --rc genhtml_branch_coverage=1 00:13:20.539 --rc genhtml_function_coverage=1 00:13:20.540 --rc genhtml_legend=1 00:13:20.540 --rc geninfo_all_blocks=1 00:13:20.540 --rc geninfo_unexecuted_blocks=1 00:13:20.540 00:13:20.540 ' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:20.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.540 --rc genhtml_branch_coverage=1 00:13:20.540 --rc genhtml_function_coverage=1 00:13:20.540 --rc genhtml_legend=1 00:13:20.540 --rc geninfo_all_blocks=1 00:13:20.540 --rc geninfo_unexecuted_blocks=1 00:13:20.540 00:13:20.540 ' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:20.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.540 --rc genhtml_branch_coverage=1 00:13:20.540 --rc genhtml_function_coverage=1 00:13:20.540 --rc genhtml_legend=1 00:13:20.540 --rc geninfo_all_blocks=1 00:13:20.540 --rc geninfo_unexecuted_blocks=1 00:13:20.540 00:13:20.540 ' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.540 14:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.540 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.702 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.702 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.702 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.702 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.702 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.703 
14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:28.703 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.703 
14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.703 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:28.704 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.704 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:28.705 Found net devices under 0000:31:00.0: cvl_0_0 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
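gather_supported_nvmf_pci_devs walks a prebuilt PCI cache for known NIC IDs (E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts) and maps each matching function to its netdev through sysfs. A stand-in using lspci in place of the script's cache:

  # E810 functions (vendor 0x8086, device 0x159b) and their net devices
  for pci in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done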
00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:28.705 Found net devices under 0000:31:00.1: cvl_0_1 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.705 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:13:28.706 00:13:28.706 --- 10.0.0.2 ping statistics --- 00:13:28.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.706 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:13:28.706 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:13:28.707 00:13:28.707 --- 10.0.0.1 ping statistics --- 00:13:28.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.707 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1599917 00:13:28.707 14:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1599917 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1599917 ']' 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.707 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.708 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.708 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:28.708 [2024-10-13 14:09:32.042590] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:13:28.708 [2024-10-13 14:09:32.042655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.708 [2024-10-13 14:09:32.184768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:28.708 [2024-10-13 14:09:32.232660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.708 [2024-10-13 14:09:32.260863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.708 [2024-10-13 14:09:32.260908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.708 [2024-10-13 14:09:32.260916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.708 [2024-10-13 14:09:32.260923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.708 [2024-10-13 14:09:32.260930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
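Per the app_setup_trace notices above, the tracepoint snapshot can be pulled from the running target, or the shm buffer copied for offline decoding (binary path assumed from this build tree):

  ./build/bin/spdk_trace -s nvmf -i 0   # live snapshot of the 0xFFFF group mask
  cp /dev/shm/nvmf_trace.0 /tmp/        # keep the raw buffer for later analysis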
00:13:28.708 [2024-10-13 14:09:32.263118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.708 [2024-10-13 14:09:32.263215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.708 [2024-10-13 14:09:32.263371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.708 [2024-10-13 14:09:32.263371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:29.284 [2024-10-13 14:09:32.921771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.284 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:29.545 14:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.545 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.545 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.545 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:29.545 [2024-10-13 14:09:33.000734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.545 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.545 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:29.545 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:29.545 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:29.545 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:32.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.244 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:37.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.536 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.537 rmmod nvme_tcp 00:17:23.537 rmmod nvme_fabrics 00:17:23.537 rmmod nvme_keyring 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1599917 ']' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1599917 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1599917 ']' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1599917 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
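The run of disconnect notices above is produced by a plain connect/disconnect loop driven over SPDK's JSON-RPC plus nvme-cli. A hedged sketch of the equivalent commands: rpc.py stands for SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket mentioned earlier (the trace calls it through the rpc_cmd wrapper), the RPC arguments are copied from the trace, and the connect flags mirror NVME_CONNECT='nvme connect -i 8' set at connect_disconnect.sh@29 — the exact loop body is not shown in this excerpt, so the -t/-n/-a/-s fabrics flags below are an assumption based on standard nvme-cli usage.

# Target configuration (mirrors the rpc_cmd calls traced above)
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                               # creates Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side churn: num_iterations=100 connect/disconnect cycles
for ((i = 0; i < 100; i++)); do
  nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
done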
00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599917 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599917' 00:17:23.537 killing process with pid 1599917 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1599917 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1599917 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.537 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.447 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.447 00:17:25.447 real 4m4.988s 00:17:25.447 user 15m30.596s 00:17:25.447 sys 0m26.411s 00:17:25.447 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.447 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:25.447 ************************************ 00:17:25.447 END TEST nvmf_connect_disconnect 00:17:25.447 ************************************ 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.447 14:13:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.447 ************************************ 00:17:25.447 START TEST nvmf_multitarget 00:17:25.447 ************************************ 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:25.447 * Looking for test storage... 00:17:25.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:17:25.447 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:25.708 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:25.708 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.709 --rc genhtml_branch_coverage=1 00:17:25.709 --rc genhtml_function_coverage=1 00:17:25.709 --rc genhtml_legend=1 00:17:25.709 --rc geninfo_all_blocks=1 00:17:25.709 --rc geninfo_unexecuted_blocks=1 00:17:25.709 00:17:25.709 ' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.709 --rc genhtml_branch_coverage=1 00:17:25.709 --rc genhtml_function_coverage=1 00:17:25.709 --rc genhtml_legend=1 00:17:25.709 --rc geninfo_all_blocks=1 00:17:25.709 --rc geninfo_unexecuted_blocks=1 00:17:25.709 00:17:25.709 ' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.709 --rc genhtml_branch_coverage=1 00:17:25.709 --rc genhtml_function_coverage=1 00:17:25.709 --rc genhtml_legend=1 00:17:25.709 --rc geninfo_all_blocks=1 00:17:25.709 --rc geninfo_unexecuted_blocks=1 00:17:25.709 00:17:25.709 ' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.709 --rc genhtml_branch_coverage=1 00:17:25.709 --rc genhtml_function_coverage=1 00:17:25.709 --rc genhtml_legend=1 00:17:25.709 --rc geninfo_all_blocks=1 00:17:25.709 --rc geninfo_unexecuted_blocks=1 00:17:25.709 00:17:25.709 ' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.709 14:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:25.709 14:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.709 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:25.710 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:25.710 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.710 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
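The e810/x722/mlx array bookkeeping above is how gather_supported_nvmf_pci_devs picks test NICs: it matches known Intel and Mellanox vendor:device IDs (here 0x8086:0x159b for the E810 ports) and then resolves each PCI address to its kernel netdev through sysfs, which is where the "Found net devices under 0000:31:00.0: cvl_0_0" lines below come from. A rough standalone equivalent, assuming lspci is available — the harness itself reads a pre-built pci_bus_cache rather than calling lspci:

# Find Intel E810 ports (vendor 0x8086, device 0x159b) and their net devices.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  echo "Found $pci (0x8086 - 0x159b)"
  for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $netdev ]] || continue                 # skip if no netdev is bound
    echo "Found net devices under $pci: ${netdev##*/}"
  done
done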
00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:33.850 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:33.850 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:33.850 Found net devices under 0000:31:00.0: cvl_0_0 00:17:33.850 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:33.851 Found net devices under 0000:31:00.1: cvl_0_1 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:17:33.851 00:17:33.851 --- 10.0.0.2 ping statistics --- 00:17:33.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.851 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:17:33.851 00:17:33.851 --- 10.0.0.1 ping statistics --- 00:17:33.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.851 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1651434 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1651434 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1651434 ']' 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:33.851 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:33.851 [2024-10-13 14:13:37.011659] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:17:33.851 [2024-10-13 14:13:37.011730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.851 [2024-10-13 14:13:37.172387] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:33.851 [2024-10-13 14:13:37.220984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.851 [2024-10-13 14:13:37.250012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.851 [2024-10-13 14:13:37.250057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.851 [2024-10-13 14:13:37.250075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.851 [2024-10-13 14:13:37.250083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.851 [2024-10-13 14:13:37.250088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.851 [2024-10-13 14:13:37.252154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.851 [2024-10-13 14:13:37.252314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.851 [2024-10-13 14:13:37.252469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.851 [2024-10-13 14:13:37.252470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:34.423 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:34.423 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:34.423 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:34.423 "nvmf_tgt_1" 00:17:34.423 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:34.683 "nvmf_tgt_2" 00:17:34.683 14:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:34.683 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:34.683 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:34.683 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:34.944 true 00:17:34.944 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:34.944 true 00:17:34.944 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:34.944 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.205 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.205 rmmod nvme_tcp 00:17:35.205 rmmod nvme_fabrics 00:17:35.206 rmmod nvme_keyring 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1651434 ']' 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1651434 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1651434 ']' 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1651434 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1651434 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1651434' 00:17:35.206 killing process with pid 1651434 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1651434 00:17:35.206 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1651434 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.466 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.379 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.379 00:17:37.379 real 0m12.012s 00:17:37.379 user 0m9.956s 00:17:37.379 sys 0m6.258s 00:17:37.379 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.379 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:37.379 ************************************ 00:17:37.379 END TEST nvmf_multitarget 00:17:37.379 ************************************ 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.640 ************************************ 00:17:37.640 START TEST nvmf_rpc 00:17:37.640 ************************************ 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:37.640 * Looking for test storage... 
00:17:37.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.640 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.902 --rc genhtml_branch_coverage=1 00:17:37.902 --rc genhtml_function_coverage=1 00:17:37.902 --rc genhtml_legend=1 00:17:37.902 --rc geninfo_all_blocks=1 00:17:37.902 --rc geninfo_unexecuted_blocks=1 00:17:37.902 00:17:37.902 ' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.902 --rc genhtml_branch_coverage=1 00:17:37.902 --rc genhtml_function_coverage=1 00:17:37.902 --rc genhtml_legend=1 00:17:37.902 --rc geninfo_all_blocks=1 00:17:37.902 --rc geninfo_unexecuted_blocks=1 00:17:37.902 00:17:37.902 ' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.902 --rc genhtml_branch_coverage=1 00:17:37.902 --rc genhtml_function_coverage=1 00:17:37.902 --rc genhtml_legend=1 00:17:37.902 --rc geninfo_all_blocks=1 00:17:37.902 --rc geninfo_unexecuted_blocks=1 00:17:37.902 00:17:37.902 ' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.902 --rc genhtml_branch_coverage=1 00:17:37.902 --rc genhtml_function_coverage=1 00:17:37.902 --rc genhtml_legend=1 00:17:37.902 --rc geninfo_all_blocks=1 00:17:37.902 --rc geninfo_unexecuted_blocks=1 00:17:37.902 00:17:37.902 ' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
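The lt/cmp_versions trace above (used here to pick lcov options) is a plain component-wise version compare: each version string is split on '.', '-' and ':' into an array, components are compared numerically left to right, and a missing component counts as zero, which is why lt 1.15 2 came back true (1 < 2 in the first position). A compact rendition of the same idea (function name ours, not the script's):

  # True (exit 0) when $1 is an older version than $2, e.g. ver_lt 1.15 2.
  ver_lt() {
      local IFS=.-: i
      local -a a=($1) b=($2)           # split on . - : via IFS word splitting
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                         # equal versions are not "less than"
  }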
00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:37.902 14:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.902 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.048 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:46.049 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:46.049 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:46.049 Found net devices under 0000:31:00.0: cvl_0_0 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:46.049 Found net devices under 0000:31:00.1: cvl_0_1 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.049 14:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.049 14:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:17:46.049 00:17:46.049 --- 10.0.0.2 ping statistics --- 00:17:46.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.049 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:17:46.049 00:17:46.049 --- 10.0.0.1 ping statistics --- 00:17:46.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.049 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1656061 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1656061 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1656061 ']' 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.049 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.049 [2024-10-13 14:13:49.246969] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:17:46.049 [2024-10-13 14:13:49.247039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.049 [2024-10-13 14:13:49.390763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
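Everything from prepare_net_devs down to the pings above builds the standard two-port TCP rig for these tests: one port of the E810 NIC (cvl_0_0, 10.0.0.2) is moved into a fresh network namespace to host the target, its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, an iptables ACCEPT rule opens port 4420, and a ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. Condensed, with the interface names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF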
00:17:46.049 [2024-10-13 14:13:49.439618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.049 [2024-10-13 14:13:49.467847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.049 [2024-10-13 14:13:49.467896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.049 [2024-10-13 14:13:49.467904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.049 [2024-10-13 14:13:49.467911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.049 [2024-10-13 14:13:49.467917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.049 [2024-10-13 14:13:49.469863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.049 [2024-10-13 14:13:49.470022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.049 [2024-10-13 14:13:49.470180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.050 [2024-10-13 14:13:49.470180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.623 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:46.623 "tick_rate": 2394400000, 00:17:46.623 "poll_groups": [ 00:17:46.623 { 00:17:46.623 "name": "nvmf_tgt_poll_group_000", 00:17:46.623 "admin_qpairs": 0, 00:17:46.623 "io_qpairs": 0, 00:17:46.623 "current_admin_qpairs": 0, 00:17:46.623 "current_io_qpairs": 0, 00:17:46.623 "pending_bdev_io": 0, 00:17:46.623 "completed_nvme_io": 0, 00:17:46.623 "transports": [] 00:17:46.623 }, 00:17:46.623 { 00:17:46.623 "name": "nvmf_tgt_poll_group_001", 00:17:46.623 "admin_qpairs": 0, 00:17:46.623 "io_qpairs": 0, 00:17:46.623 "current_admin_qpairs": 0, 00:17:46.623 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "name": "nvmf_tgt_poll_group_002", 00:17:46.624 "admin_qpairs": 0, 00:17:46.624 "io_qpairs": 0, 00:17:46.624 "current_admin_qpairs": 0, 00:17:46.624 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "name": "nvmf_tgt_poll_group_003", 00:17:46.624 "admin_qpairs": 0, 
00:17:46.624 "io_qpairs": 0, 00:17:46.624 "current_admin_qpairs": 0, 00:17:46.624 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [] 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.624 [2024-10-13 14:13:50.245026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:46.624 "tick_rate": 2394400000, 00:17:46.624 "poll_groups": [ 00:17:46.624 { 00:17:46.624 "name": "nvmf_tgt_poll_group_000", 00:17:46.624 "admin_qpairs": 0, 00:17:46.624 "io_qpairs": 0, 00:17:46.624 "current_admin_qpairs": 0, 00:17:46.624 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [ 00:17:46.624 { 00:17:46.624 "trtype": "TCP" 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "name": "nvmf_tgt_poll_group_001", 00:17:46.624 "admin_qpairs": 0, 00:17:46.624 "io_qpairs": 0, 00:17:46.624 "current_admin_qpairs": 0, 00:17:46.624 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [ 00:17:46.624 { 00:17:46.624 "trtype": "TCP" 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "name": "nvmf_tgt_poll_group_002", 00:17:46.624 "admin_qpairs": 0, 00:17:46.624 "io_qpairs": 0, 00:17:46.624 "current_admin_qpairs": 0, 00:17:46.624 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [ 00:17:46.624 { 00:17:46.624 "trtype": "TCP" 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }, 00:17:46.624 { 00:17:46.624 "name": "nvmf_tgt_poll_group_003", 00:17:46.624 "admin_qpairs": 0, 00:17:46.624 "io_qpairs": 0, 00:17:46.624 "current_admin_qpairs": 0, 00:17:46.624 "current_io_qpairs": 0, 00:17:46.624 "pending_bdev_io": 
0, 00:17:46.624 "completed_nvme_io": 0, 00:17:46.624 "transports": [ 00:17:46.624 { 00:17:46.624 "trtype": "TCP" 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 } 00:17:46.624 ] 00:17:46.624 }' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:46.624 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.885 Malloc1 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.885 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.886 14:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 [2024-10-13 14:13:50.459647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:17:46.886 [2024-10-13 14:13:50.496712] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:46.886 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:46.886 could not add new controller: failed to write to nvme-fabrics device 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
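The Input/output error above is the expected outcome: the subsystem was created with allow_any_host disabled (rpc.sh@54 ran nvmf_subsystem_allow_any_host -d), so the target's access check rejects the connect and the NOT wrapper turns that failure into a pass. The remedy the test applies next is to whitelist the initiator's host NQN; in plain rpc.py/nvme-cli terms, with the values from this run:

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396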
00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.886 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.799 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:48.799 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:48.799 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.799 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:48.799 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:50.710 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.711 [2024-10-13 14:13:54.372133] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:17:50.711 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:50.711 could not add new controller: failed to write to nvme-fabrics device 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.711 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.623 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.623 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:52.623 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.623 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:52.623 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:54.533 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 [2024-10-13 14:13:58.120656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.533 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.441 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.441 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:56.441 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.441 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:56.441 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.351 14:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.351 [2024-10-13 14:14:01.919907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.351 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.733 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.733 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:59.733 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.733 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:59.733 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 [2024-10-13 14:14:05.739474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.278 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.664 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.664 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:03.664 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.664 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:03.664 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:05.692 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:05.692 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:05.692 14:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.692 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:05.692 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.692 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:05.692 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.972 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.972 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:05.972 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:05.972 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.972 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:05.972 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 [2024-10-13 14:14:09.513747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
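
The iterations above exercise the full subsystem lifecycle from target/rpc.sh lines 81-94: create nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 bdev as namespace 5, allow any host, connect with nvme-cli, wait for the serial to show up in lsblk, then tear everything down in reverse order. A minimal sketch of one iteration, assuming the target application is already running and scripts/rpc.py talks to its default RPC socket; the until loop is a simplified stand-in for the harness's bounded waitforserial helper:

    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Build the subsystem: serial, TCP listener, one namespace, open host access.
    scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"

    # Connect from the initiator side and wait for the block device to appear.
    nvme connect --hostnqn="$HOSTNQN" --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done

    # Tear down in reverse: disconnect, drop the namespace, delete the subsystem.
    nvme disconnect -n "$NQN"
    scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    scripts/rpc.py nvmf_delete_subsystem "$NQN"

The second batch of iterations further down (target/rpc.sh lines 99-107) runs the same create/add/remove/delete cycle without ever connecting an initiator, which is why no nvme connect or waitforserial output appears there.
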
00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.973 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:07.884 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.884 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:07.884 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.884 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:07.884 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.799 [2024-10-13 14:14:13.274650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.799 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.184 14:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.184 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:11.184 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.184 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:11.184 14:14:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:13.096 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:13.096 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:13.096 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:13.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 [2024-10-13 14:14:17.005167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.357 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.618 [2024-10-13 14:14:17.077141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:13.618 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 [2024-10-13 14:14:17.149158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 [2024-10-13 14:14:17.221218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 [2024-10-13 14:14:17.293297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.619 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:13.881 "tick_rate": 2394400000, 00:18:13.881 "poll_groups": [ 00:18:13.881 { 00:18:13.881 "name": "nvmf_tgt_poll_group_000", 00:18:13.881 "admin_qpairs": 0, 00:18:13.881 "io_qpairs": 224, 00:18:13.881 "current_admin_qpairs": 0, 00:18:13.881 "current_io_qpairs": 0, 00:18:13.881 "pending_bdev_io": 0, 00:18:13.881 "completed_nvme_io": 243, 00:18:13.881 "transports": [ 00:18:13.881 { 00:18:13.881 "trtype": "TCP" 00:18:13.881 } 00:18:13.881 ] 00:18:13.881 }, 00:18:13.881 { 00:18:13.881 "name": "nvmf_tgt_poll_group_001", 00:18:13.881 "admin_qpairs": 1, 00:18:13.881 "io_qpairs": 223, 00:18:13.881 "current_admin_qpairs": 0, 00:18:13.881 "current_io_qpairs": 0, 00:18:13.881 "pending_bdev_io": 0, 00:18:13.881 "completed_nvme_io": 378, 00:18:13.881 "transports": [ 00:18:13.881 { 00:18:13.881 "trtype": "TCP" 00:18:13.881 } 00:18:13.881 ] 00:18:13.881 }, 00:18:13.881 { 00:18:13.881 "name": "nvmf_tgt_poll_group_002", 00:18:13.881 "admin_qpairs": 6, 00:18:13.881 "io_qpairs": 218, 00:18:13.881 "current_admin_qpairs": 0, 00:18:13.881 "current_io_qpairs": 0, 00:18:13.881 "pending_bdev_io": 0, 00:18:13.881 "completed_nvme_io": 361, 00:18:13.881 "transports": [ 00:18:13.881 { 00:18:13.881 "trtype": "TCP" 00:18:13.881 } 00:18:13.881 ] 00:18:13.881 }, 00:18:13.881 { 00:18:13.881 "name": "nvmf_tgt_poll_group_003", 00:18:13.881 "admin_qpairs": 0, 00:18:13.881 "io_qpairs": 224, 00:18:13.881 "current_admin_qpairs": 0, 00:18:13.881 "current_io_qpairs": 0, 00:18:13.881 "pending_bdev_io": 0, 00:18:13.881 "completed_nvme_io": 257, 00:18:13.881 "transports": [ 00:18:13.881 { 00:18:13.881 "trtype": "TCP" 00:18:13.881 } 00:18:13.881 ] 00:18:13.881 } 00:18:13.881 ] 00:18:13.881 }' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].admin_qpairs' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.881 rmmod nvme_tcp 00:18:13.881 rmmod nvme_fabrics 00:18:13.881 rmmod nvme_keyring 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1656061 ']' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1656061 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1656061 ']' 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1656061 00:18:13.881 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:18:13.882 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.882 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1656061 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1656061' 00:18:14.142 killing process with pid 1656061 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1656061 00:18:14.142 14:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1656061 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.142 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:16.694 00:18:16.694 real 0m38.639s 00:18:16.694 user 1m54.983s 00:18:16.694 sys 0m8.092s 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.694 ************************************ 00:18:16.694 END TEST nvmf_rpc 00:18:16.694 ************************************ 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.694 ************************************ 00:18:16.694 START TEST nvmf_invalid 00:18:16.694 ************************************ 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:16.694 * Looking for test storage... 
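
One detail worth noting from the nvmf_rpc teardown above: before killing the target, the test pulls nvmf_get_stats and asserts that qpair accounting adds up across the four poll groups (admin_qpairs 0+1+6+0 = 7, io_qpairs 224+223+218+224 = 889, both required to be greater than zero). The jsum helper doing the folding is traced at target/rpc.sh lines 19-20 (a local jq filter piped through awk); a plausible reconstruction, assuming it reads the JSON previously captured into the stats variable rather than re-querying the target:

    # Fold one numeric field across the poll groups in the captured stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    stats=$(rpc_cmd nvmf_get_stats)       # as captured at target/rpc.sh line 110
    jsum '.poll_groups[].admin_qpairs'    # 0+1+6+0   -> 7
    jsum '.poll_groups[].io_qpairs'       # 224+223+218+224 -> 889

Since jq emits one number per poll group per line, the awk accumulator is all that is needed to total them; the subsequent '[' rdma == tcp ']' check then skips an RDMA-only branch before nvmftestfini unloads the nvme-tcp, nvme-fabrics, and nvme-keyring modules seen above.
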
00:18:16.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:16.694 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.694 --rc genhtml_branch_coverage=1 00:18:16.694 --rc genhtml_function_coverage=1 00:18:16.694 --rc genhtml_legend=1 00:18:16.694 --rc geninfo_all_blocks=1 00:18:16.694 --rc geninfo_unexecuted_blocks=1 00:18:16.694 00:18:16.694 ' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.694 --rc genhtml_branch_coverage=1 00:18:16.694 --rc genhtml_function_coverage=1 00:18:16.694 --rc genhtml_legend=1 00:18:16.694 --rc geninfo_all_blocks=1 00:18:16.694 --rc geninfo_unexecuted_blocks=1 00:18:16.694 00:18:16.694 ' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.694 --rc genhtml_branch_coverage=1 00:18:16.694 --rc genhtml_function_coverage=1 00:18:16.694 --rc genhtml_legend=1 00:18:16.694 --rc geninfo_all_blocks=1 00:18:16.694 --rc geninfo_unexecuted_blocks=1 00:18:16.694 00:18:16.694 ' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.694 --rc genhtml_branch_coverage=1 00:18:16.694 --rc genhtml_function_coverage=1 00:18:16.694 --rc genhtml_legend=1 00:18:16.694 --rc geninfo_all_blocks=1 00:18:16.694 --rc geninfo_unexecuted_blocks=1 00:18:16.694 00:18:16.694 ' 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:16.694 14:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.694 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:16.695 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:24.837 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:24.837 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:24.837 Found net devices under 0000:31:00.0: cvl_0_0 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:24.837 Found net devices under 0000:31:00.1: cvl_0_1 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:24.837 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:24.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:18:24.838 00:18:24.838 --- 10.0.0.2 ping statistics --- 00:18:24.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.838 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:18:24.838 00:18:24.838 --- 10.0.0.1 ping statistics --- 00:18:24.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.838 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1665994 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1665994 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1665994 ']' 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.838 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:24.838 [2024-10-13 14:14:27.844520] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
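The nvmf_tcp_init block traced above moves one port of the NIC pair (cvl_0_0) into a private network namespace so a single host can act as both NVMe/TCP target and initiator. Condensed to the bare commands, using the device names and addresses from this run, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host

nvmfappstart then launches build/bin/nvmf_tgt through the same ip netns exec cvl_0_0_ns_spdk prefix (the NVMF_TARGET_NS_CMD array), which is why the EAL banner and the four reactor threads below come up inside the namespace.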
00:18:24.838 [2024-10-13 14:14:27.844584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.838 [2024-10-13 14:14:27.986674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:24.838 [2024-10-13 14:14:28.036313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:24.838 [2024-10-13 14:14:28.064393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.838 [2024-10-13 14:14:28.064436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.838 [2024-10-13 14:14:28.064444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.838 [2024-10-13 14:14:28.064451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.838 [2024-10-13 14:14:28.064458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.838 [2024-10-13 14:14:28.066370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.838 [2024-10-13 14:14:28.066528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.838 [2024-10-13 14:14:28.066678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.838 [2024-10-13 14:14:28.066679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:25.100 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9571 00:18:25.361 [2024-10-13 14:14:28.893626] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:25.361 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:25.361 { 00:18:25.361 "nqn": "nqn.2016-06.io.spdk:cnode9571", 00:18:25.361 "tgt_name": "foobar", 00:18:25.361 "method": "nvmf_create_subsystem", 00:18:25.361 "req_id": 1 00:18:25.361 } 00:18:25.361 Got JSON-RPC error response 00:18:25.361 response: 00:18:25.361 { 00:18:25.361 "code": -32603, 00:18:25.361 "message": "Unable to find target foobar" 00:18:25.361 }' 00:18:25.361 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:25.361 { 00:18:25.361 "nqn": 
"nqn.2016-06.io.spdk:cnode9571", 00:18:25.361 "tgt_name": "foobar", 00:18:25.361 "method": "nvmf_create_subsystem", 00:18:25.361 "req_id": 1 00:18:25.361 } 00:18:25.361 Got JSON-RPC error response 00:18:25.361 response: 00:18:25.361 { 00:18:25.361 "code": -32603, 00:18:25.361 "message": "Unable to find target foobar" 00:18:25.361 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:25.361 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:25.361 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22138 00:18:25.622 [2024-10-13 14:14:29.105954] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22138: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:25.622 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:25.622 { 00:18:25.622 "nqn": "nqn.2016-06.io.spdk:cnode22138", 00:18:25.622 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:25.622 "method": "nvmf_create_subsystem", 00:18:25.622 "req_id": 1 00:18:25.622 } 00:18:25.622 Got JSON-RPC error response 00:18:25.622 response: 00:18:25.622 { 00:18:25.622 "code": -32602, 00:18:25.622 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:25.622 }' 00:18:25.622 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:25.622 { 00:18:25.622 "nqn": "nqn.2016-06.io.spdk:cnode22138", 00:18:25.622 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:25.622 "method": "nvmf_create_subsystem", 00:18:25.622 "req_id": 1 00:18:25.622 } 00:18:25.622 Got JSON-RPC error response 00:18:25.622 response: 00:18:25.622 { 00:18:25.622 "code": -32602, 00:18:25.622 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:25.622 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:25.622 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:25.622 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14530 00:18:25.622 [2024-10-13 14:14:29.318209] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14530: invalid model number 'SPDK_Controller' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:25.884 { 00:18:25.884 "nqn": "nqn.2016-06.io.spdk:cnode14530", 00:18:25.884 "model_number": "SPDK_Controller\u001f", 00:18:25.884 "method": "nvmf_create_subsystem", 00:18:25.884 "req_id": 1 00:18:25.884 } 00:18:25.884 Got JSON-RPC error response 00:18:25.884 response: 00:18:25.884 { 00:18:25.884 "code": -32602, 00:18:25.884 "message": "Invalid MN SPDK_Controller\u001f" 00:18:25.884 }' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:25.884 { 00:18:25.884 "nqn": "nqn.2016-06.io.spdk:cnode14530", 00:18:25.884 "model_number": "SPDK_Controller\u001f", 00:18:25.884 "method": "nvmf_create_subsystem", 00:18:25.884 "req_id": 1 00:18:25.884 } 00:18:25.884 Got JSON-RPC error response 00:18:25.884 response: 00:18:25.884 { 00:18:25.884 "code": -32602, 00:18:25.884 "message": "Invalid MN SPDK_Controller\u001f" 00:18:25.884 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:25.884 14:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
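The three rejections above are the point of this test: nvmf_create_subsystem must refuse a nonexistent target name, and serial/model numbers carrying a control character (the \x1f appended by invalid.sh@45 and @50). Replayed in isolation against a running target, with the rpc.py path from this workspace, the calls look like this (a sketch; the script additionally captures the output into out and pattern-matches it, as in the [[ ... ]] lines above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # unknown target name -> JSON-RPC -32603 "Unable to find target foobar"
    $rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9571 || true
    # serial number with a trailing 0x1f control byte -> -32602 "Invalid SN ..."
    $rpc nvmf_create_subsystem -s "$(echo -e 'SPDKISFASTANDAWESOME\x1f')" \
        nqn.2016-06.io.spdk:cnode22138 || true
    # model number with the same control byte -> -32602 "Invalid MN ..."
    $rpc nvmf_create_subsystem -d "$(echo -e 'SPDK_Controller\x1f')" \
        nqn.2016-06.io.spdk:cnode14530 || true

The long printf/echo run this note sits in is gen_random_s assembling the next (21-character) test string; see the reconstruction just below.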
00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
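What the repeating trace pattern here does: gen_random_s (target/invalid.sh@19-@31) loops length times, picks an entry from the chars array of decimal codes 32 through 127, converts it to hex with printf %x, renders the character with echo -e '\xNN', and appends it to string. The helper's source is not in this log, so the following is a reconstruction and its details are assumptions:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})            # decimal codes: ' ' through DEL
        for (( ll = 0; ll < length; ll++ )); do
            local c=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$c")")   # code -> hex escape -> char
        done
        echo "$string"
    }

The [[ b == \- ]] check at invalid.sh@28, visible further down, also inspects the first generated character, presumably so a string starting with '-' is not mistaken for a command-line option by rpc.py.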
00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.884 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 104 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b?Iu%mL9`:h-{"~h\ ]GJ' 00:18:25.885 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'b?Iu%mL9`:h-{"~h\ ]GJ' nqn.2016-06.io.spdk:cnode3105 00:18:26.147 [2024-10-13 14:14:29.698804] nvmf_rpc.c: 
413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3105: invalid serial number 'b?Iu%mL9`:h-{"~h\ ]GJ' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:26.147 { 00:18:26.147 "nqn": "nqn.2016-06.io.spdk:cnode3105", 00:18:26.147 "serial_number": "b?Iu%mL9`:h-{\"~h\\ ]GJ", 00:18:26.147 "method": "nvmf_create_subsystem", 00:18:26.147 "req_id": 1 00:18:26.147 } 00:18:26.147 Got JSON-RPC error response 00:18:26.147 response: 00:18:26.147 { 00:18:26.147 "code": -32602, 00:18:26.147 "message": "Invalid SN b?Iu%mL9`:h-{\"~h\\ ]GJ" 00:18:26.147 }' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:26.147 { 00:18:26.147 "nqn": "nqn.2016-06.io.spdk:cnode3105", 00:18:26.147 "serial_number": "b?Iu%mL9`:h-{\"~h\\ ]GJ", 00:18:26.147 "method": "nvmf_create_subsystem", 00:18:26.147 "req_id": 1 00:18:26.147 } 00:18:26.147 Got JSON-RPC error response 00:18:26.147 response: 00:18:26.147 { 00:18:26.147 "code": -32602, 00:18:26.147 "message": "Invalid SN b?Iu%mL9`:h-{\"~h\\ ]GJ" 00:18:26.147 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
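A detail worth noticing: these strings are only nominally random. invalid.sh@16, traced near the top of this section, assigns RANDOM=0, and assigning to RANDOM reseeds bash's generator, so every run draws the same sequence and the rejected serial numbers are reproducible. Quick check in any bash shell:

    RANDOM=0; echo $RANDOM $RANDOM
    RANDOM=0; echo $RANDOM $RANDOM   # same pair again (exact values vary by bash version)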
00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:26.147 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:26.410 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:26.410 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.410 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.410 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 
00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
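A reading aid for the comparison lines in this trace: patterns such as *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* and *\I\n\v\a\l\i\d\ \S\N* are not regex noise; xtrace prints the quoted part of a [[ ]] glob pattern with every literal character backslash-escaped. In the script the checks read simply:

    [[ $out == *"Unable to find target"* ]]   # invalid.sh@41
    [[ $out == *"Invalid SN"* ]]              # invalid.sh@46 and @55
    [[ $out == *"Invalid MN"* ]]              # invalid.sh@51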
00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=A 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '.=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV' 00:18:26.411 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '.=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV' nqn.2016-06.io.spdk:cnode7173 00:18:26.674 [2024-10-13 14:14:30.247655] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7173: invalid model number '.=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV' 00:18:26.674 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:26.674 { 00:18:26.674 "nqn": "nqn.2016-06.io.spdk:cnode7173", 00:18:26.674 "model_number": ".=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV", 00:18:26.674 "method": "nvmf_create_subsystem", 00:18:26.674 "req_id": 1 00:18:26.674 } 00:18:26.674 Got JSON-RPC error response 00:18:26.674 response: 00:18:26.674 { 00:18:26.674 "code": -32602, 00:18:26.674 "message": "Invalid MN .=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV" 00:18:26.674 }' 00:18:26.674 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:26.674 { 00:18:26.674 "nqn": "nqn.2016-06.io.spdk:cnode7173", 00:18:26.674 "model_number": ".=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV", 00:18:26.674 "method": "nvmf_create_subsystem", 00:18:26.674 "req_id": 1 00:18:26.674 } 00:18:26.674 Got JSON-RPC error response 00:18:26.674 response: 00:18:26.674 { 00:18:26.674 "code": -32602, 00:18:26.674 "message": "Invalid MN .=-n5l7FJ1L[c},6Fq$oN#:aHP%I?LCpgyA]S@5vV" 00:18:26.674 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:26.674 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:26.935 [2024-10-13 14:14:30.452104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.935 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:27.197 [2024-10-13 14:14:30.852480] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:27.197 { 00:18:27.197 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:27.197 "listen_address": { 00:18:27.197 "trtype": "tcp", 00:18:27.197 "traddr": "", 00:18:27.197 "trsvcid": "4421" 00:18:27.197 }, 00:18:27.197 "method": "nvmf_subsystem_remove_listener", 00:18:27.197 "req_id": 1 00:18:27.197 } 00:18:27.197 Got JSON-RPC error response 00:18:27.197 response: 00:18:27.197 { 00:18:27.197 "code": -32602, 00:18:27.197 "message": "Invalid parameters" 00:18:27.197 
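
The per-character trace above is target/invalid.sh assembling a random 41-character model number one byte at a time: pick a character code, render it with printf %x plus echo -e, append it, then hand the result to nvmf_create_subsystem -d, which must reject it because the NVMe model-number field holds at most 40 bytes. A minimal sketch of both halves of that pattern; the helper names, the character range, and the rpc_py path are illustrative, not lifted from the script:

gen_random_string() {
    local length=$1 ll string='' code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))       # printable ASCII 33..126
        printf -v hex %x "$code"           # e.g. 54 -> 36
        string+=$(echo -e "\x$hex")        # '\x36' -> '6', as traced above
    done
    echo "$string"
}

rpc_py=/path/to/spdk/scripts/rpc.py        # adjust to your checkout
expect_rpc_error() {                       # the RPC must fail and must
    local match=$1; shift                  # mention the expected reason
    local out
    out=$("$rpc_py" "$@" 2>&1) && return 1
    [[ $out == *"$match"* ]]
}

expect_rpc_error 'Invalid MN' \
    nvmf_create_subsystem -d "$(gen_random_string 41)" nqn.2016-06.io.spdk:cnode7173
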
}' 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:27.197 { 00:18:27.197 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:27.197 "listen_address": { 00:18:27.197 "trtype": "tcp", 00:18:27.197 "traddr": "", 00:18:27.197 "trsvcid": "4421" 00:18:27.197 }, 00:18:27.197 "method": "nvmf_subsystem_remove_listener", 00:18:27.197 "req_id": 1 00:18:27.197 } 00:18:27.197 Got JSON-RPC error response 00:18:27.197 response: 00:18:27.197 { 00:18:27.197 "code": -32602, 00:18:27.197 "message": "Invalid parameters" 00:18:27.197 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:27.197 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode956 -i 0 00:18:27.457 [2024-10-13 14:14:31.040612] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode956: invalid cntlid range [0-65519] 00:18:27.457 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:27.457 { 00:18:27.457 "nqn": "nqn.2016-06.io.spdk:cnode956", 00:18:27.457 "min_cntlid": 0, 00:18:27.457 "method": "nvmf_create_subsystem", 00:18:27.457 "req_id": 1 00:18:27.457 } 00:18:27.457 Got JSON-RPC error response 00:18:27.457 response: 00:18:27.457 { 00:18:27.457 "code": -32602, 00:18:27.457 "message": "Invalid cntlid range [0-65519]" 00:18:27.457 }' 00:18:27.457 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:27.457 { 00:18:27.457 "nqn": "nqn.2016-06.io.spdk:cnode956", 00:18:27.457 "min_cntlid": 0, 00:18:27.457 "method": "nvmf_create_subsystem", 00:18:27.457 "req_id": 1 00:18:27.457 } 00:18:27.457 Got JSON-RPC error response 00:18:27.457 response: 00:18:27.457 { 00:18:27.457 "code": -32602, 00:18:27.457 "message": "Invalid cntlid range [0-65519]" 00:18:27.457 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:27.457 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3032 -i 65520 00:18:27.718 [2024-10-13 14:14:31.212740] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3032: invalid cntlid range [65520-65519] 00:18:27.718 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:27.718 { 00:18:27.718 "nqn": "nqn.2016-06.io.spdk:cnode3032", 00:18:27.718 "min_cntlid": 65520, 00:18:27.718 "method": "nvmf_create_subsystem", 00:18:27.718 "req_id": 1 00:18:27.718 } 00:18:27.718 Got JSON-RPC error response 00:18:27.718 response: 00:18:27.718 { 00:18:27.718 "code": -32602, 00:18:27.718 "message": "Invalid cntlid range [65520-65519]" 00:18:27.718 }' 00:18:27.718 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:27.718 { 00:18:27.718 "nqn": "nqn.2016-06.io.spdk:cnode3032", 00:18:27.718 "min_cntlid": 65520, 00:18:27.718 "method": "nvmf_create_subsystem", 00:18:27.718 "req_id": 1 00:18:27.718 } 00:18:27.718 Got JSON-RPC error response 00:18:27.718 response: 00:18:27.718 { 00:18:27.718 "code": -32602, 00:18:27.718 "message": "Invalid cntlid range [65520-65519]" 00:18:27.718 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:27.718 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27712 -I 0 00:18:27.718 [2024-10-13 14:14:31.396868] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27712: invalid cntlid range [1-0] 00:18:27.978 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:27.978 { 00:18:27.978 "nqn": "nqn.2016-06.io.spdk:cnode27712", 00:18:27.978 "max_cntlid": 0, 00:18:27.978 "method": "nvmf_create_subsystem", 00:18:27.978 "req_id": 1 00:18:27.978 } 00:18:27.978 Got JSON-RPC error response 00:18:27.978 response: 00:18:27.978 { 00:18:27.978 "code": -32602, 00:18:27.978 "message": "Invalid cntlid range [1-0]" 00:18:27.978 }' 00:18:27.978 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:27.978 { 00:18:27.978 "nqn": "nqn.2016-06.io.spdk:cnode27712", 00:18:27.978 "max_cntlid": 0, 00:18:27.978 "method": "nvmf_create_subsystem", 00:18:27.978 "req_id": 1 00:18:27.978 } 00:18:27.978 Got JSON-RPC error response 00:18:27.978 response: 00:18:27.978 { 00:18:27.979 "code": -32602, 00:18:27.979 "message": "Invalid cntlid range [1-0]" 00:18:27.979 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:27.979 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2784 -I 65520 00:18:27.979 [2024-10-13 14:14:31.577041] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2784: invalid cntlid range [1-65520] 00:18:27.979 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:27.979 { 00:18:27.979 "nqn": "nqn.2016-06.io.spdk:cnode2784", 00:18:27.979 "max_cntlid": 65520, 00:18:27.979 "method": "nvmf_create_subsystem", 00:18:27.979 "req_id": 1 00:18:27.979 } 00:18:27.979 Got JSON-RPC error response 00:18:27.979 response: 00:18:27.979 { 00:18:27.979 "code": -32602, 00:18:27.979 "message": "Invalid cntlid range [1-65520]" 00:18:27.979 }' 00:18:27.979 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:27.979 { 00:18:27.979 "nqn": "nqn.2016-06.io.spdk:cnode2784", 00:18:27.979 "max_cntlid": 65520, 00:18:27.979 "method": "nvmf_create_subsystem", 00:18:27.979 "req_id": 1 00:18:27.979 } 00:18:27.979 Got JSON-RPC error response 00:18:27.979 response: 00:18:27.979 { 00:18:27.979 "code": -32602, 00:18:27.979 "message": "Invalid cntlid range [1-65520]" 00:18:27.979 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:27.979 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32386 -i 6 -I 5 00:18:28.240 [2024-10-13 14:14:31.765188] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32386: invalid cntlid range [6-5] 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:28.240 { 00:18:28.240 "nqn": "nqn.2016-06.io.spdk:cnode32386", 00:18:28.240 "min_cntlid": 6, 00:18:28.240 "max_cntlid": 5, 00:18:28.240 "method": "nvmf_create_subsystem", 00:18:28.240 "req_id": 1 00:18:28.240 } 00:18:28.240 Got JSON-RPC error response 00:18:28.240 response: 00:18:28.240 { 00:18:28.240 "code": -32602, 00:18:28.240 "message": "Invalid cntlid range [6-5]" 00:18:28.240 }' 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 
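
Controller-ID ranges are validated against 1 <= min <= max <= 65519 (0xFFEF), and the probes above break that invariant in every possible way: zero minimum, minimum past the ceiling, zero maximum, maximum past the ceiling, and minimum greater than maximum. Restated with the expect_rpc_error helper sketched earlier (cnode numbers as in the trace):

expect_rpc_error 'Invalid cntlid range [0-65519]'     nvmf_create_subsystem nqn.2016-06.io.spdk:cnode956   -i 0
expect_rpc_error 'Invalid cntlid range [65520-65519]' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3032  -i 65520
expect_rpc_error 'Invalid cntlid range [1-0]'         nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27712 -I 0
expect_rpc_error 'Invalid cntlid range [1-65520]'     nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2784  -I 65520
expect_rpc_error 'Invalid cntlid range [6-5]'         nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32386 -i 6 -I 5
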
-- # [[ request: 00:18:28.240 { 00:18:28.240 "nqn": "nqn.2016-06.io.spdk:cnode32386", 00:18:28.240 "min_cntlid": 6, 00:18:28.240 "max_cntlid": 5, 00:18:28.240 "method": "nvmf_create_subsystem", 00:18:28.240 "req_id": 1 00:18:28.240 } 00:18:28.240 Got JSON-RPC error response 00:18:28.240 response: 00:18:28.240 { 00:18:28.240 "code": -32602, 00:18:28.240 "message": "Invalid cntlid range [6-5]" 00:18:28.240 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:28.240 { 00:18:28.240 "name": "foobar", 00:18:28.240 "method": "nvmf_delete_target", 00:18:28.240 "req_id": 1 00:18:28.240 } 00:18:28.240 Got JSON-RPC error response 00:18:28.240 response: 00:18:28.240 { 00:18:28.240 "code": -32602, 00:18:28.240 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:28.240 }' 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:28.240 { 00:18:28.240 "name": "foobar", 00:18:28.240 "method": "nvmf_delete_target", 00:18:28.240 "req_id": 1 00:18:28.240 } 00:18:28.240 Got JSON-RPC error response 00:18:28.240 response: 00:18:28.240 { 00:18:28.240 "code": -32602, 00:18:28.240 "message": "The specified target doesn't exist, cannot delete it." 00:18:28.240 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.240 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.240 rmmod nvme_tcp 00:18:28.240 rmmod nvme_fabrics 00:18:28.240 rmmod nvme_keyring 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1665994 ']' 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1665994 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1665994 ']' 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1665994 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.501 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665994 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665994' 00:18:28.501 killing process with pid 1665994 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1665994 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1665994 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.501 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:31.048 00:18:31.048 real 0m14.354s 00:18:31.048 user 0m20.981s 00:18:31.048 sys 0m6.822s 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:31.048 ************************************ 00:18:31.048 END TEST nvmf_invalid 00:18:31.048 ************************************ 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:31.048 14:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.048 ************************************ 00:18:31.048 START TEST nvmf_connect_stress 00:18:31.048 ************************************ 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # 
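
The starred banners and the real/user/sys trio that close nvmf_invalid and open nvmf_connect_stress come from the run_test wrapper; a simplified sketch of its shape (the shipped wrapper in autotest_common.sh also records per-test results for the final report):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                              # emits the real/user/sys summary
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test nvmf_connect_stress ./test/nvmf/target/connect_stress.sh --transport=tcp
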
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:31.049 * Looking for test storage... 00:18:31.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:31.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.049 --rc genhtml_branch_coverage=1 00:18:31.049 --rc genhtml_function_coverage=1 00:18:31.049 --rc genhtml_legend=1 00:18:31.049 --rc geninfo_all_blocks=1 00:18:31.049 --rc geninfo_unexecuted_blocks=1 00:18:31.049 00:18:31.049 ' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:31.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.049 --rc genhtml_branch_coverage=1 00:18:31.049 --rc genhtml_function_coverage=1 00:18:31.049 --rc genhtml_legend=1 00:18:31.049 --rc geninfo_all_blocks=1 00:18:31.049 --rc geninfo_unexecuted_blocks=1 00:18:31.049 00:18:31.049 ' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:31.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.049 --rc genhtml_branch_coverage=1 00:18:31.049 --rc genhtml_function_coverage=1 00:18:31.049 --rc genhtml_legend=1 00:18:31.049 --rc geninfo_all_blocks=1 00:18:31.049 --rc geninfo_unexecuted_blocks=1 00:18:31.049 00:18:31.049 ' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:31.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.049 --rc genhtml_branch_coverage=1 00:18:31.049 --rc genhtml_function_coverage=1 00:18:31.049 --rc genhtml_legend=1 00:18:31.049 --rc geninfo_all_blocks=1 00:18:31.049 --rc geninfo_unexecuted_blocks=1 00:18:31.049 00:18:31.049 ' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
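
The cmp_versions walk above splits '1.15' and '2' on dots and dashes into the ver1/ver2 arrays and compares them field by field, treating missing fields as 0. A minimal standalone equivalent of the lt check being traced:

# returns 0 when $1 is strictly older than $2, so 'lt 1.15 2' succeeds
lt() {
    local IFS=.- i v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                               # equal versions are not "less than"
}

A successful 'lt 1.15 2' is what selects the older lcov_branch_coverage/lcov_function_coverage spellings exported in LCOV_OPTS above.
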
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
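
The PATH dumps above keep growing because paths/export.sh@2-@4 unconditionally prepend the golangci, protoc, and go directories every time a nested script sources the file, so the same triple stacks up over and over. An idempotent prepend keeps PATH stable; this is an illustrative alternative, not the shipped export.sh:

path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                       # already present, leave PATH alone
        *) PATH=$1:$PATH ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH
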
-- # '[' '' -eq 1 ']' 00:18:31.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:31.049 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:31.050 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:39.197 14:14:41 
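
The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 feeding an empty variable to a numeric test ('[' '' -eq 1 ']'); the test fails harmlessly and the script continues. Defaulting the expansion would keep it well-formed; a sketch with a hypothetical flag name and effect:

# before: [ "$SPDK_RUN_SOME_FLAG" -eq 1 ]  -> "[: : integer expression expected" when unset
if [ "${SPDK_RUN_SOME_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--some-option)              # hypothetical effect of the flag
fi
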
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:39.197 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.197 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:39.197 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:39.198 Found net devices under 0000:31:00.0: cvl_0_0 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:39.198 Found net devices under 0000:31:00.1: cvl_0_1 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
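
Device discovery above whitelists Intel E810 functions (vendor 0x8086, device 0x159b) and then resolves each PCI address to its kernel net device by globbing sysfs. The same lookup by hand, using the addresses from this run:

for pci in 0000:31:00.0 0000:31:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $dev ]] || continue          # no bound driver, no net device
        echo "Found net devices under $pci: ${dev##*/}"   # cvl_0_0 / cvl_0_1
    done
done
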
-- # net_devs+=("${pci_net_devs[@]}") 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.198 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:39.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:18:39.198 00:18:39.198 --- 10.0.0.2 ping statistics --- 00:18:39.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.198 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:18:39.198 00:18:39.198 --- 10.0.0.1 ping statistics --- 00:18:39.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.198 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1671240 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1671240 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1671240 ']' 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
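
nvmf_tcp_init isolates the target port in its own network namespace so initiator and target traffic crosses a real link: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and both directions are ping-verified. Condensed from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side enters the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays put
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
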
/var/tmp/spdk.sock...' 00:18:39.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.198 14:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.198 [2024-10-13 14:14:42.353447] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:18:39.198 [2024-10-13 14:14:42.353510] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.198 [2024-10-13 14:14:42.495792] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:39.198 [2024-10-13 14:14:42.543158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.198 [2024-10-13 14:14:42.570329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.198 [2024-10-13 14:14:42.570375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.198 [2024-10-13 14:14:42.570384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.198 [2024-10-13 14:14:42.570392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.198 [2024-10-13 14:14:42.570398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.198 [2024-10-13 14:14:42.572206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.198 [2024-10-13 14:14:42.572432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.198 [2024-10-13 14:14:42.572433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.460 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.460 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:18:39.460 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:39.460 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.460 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.723 [2024-10-13 14:14:43.217530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd 
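
nvmfappstart launches the target inside the namespace with core mask 0xE, which is why three reactors come up on cores 1-3 above, then waitforlisten polls /var/tmp/spdk.sock until the app answers RPCs. One way to approximate that startup (the real helper in autotest_common.sh is more careful about timeouts and liveness):

ip netns exec cvl_0_0_ns_spdk \
    /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5                              # poll until the RPC socket is live
done
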
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.723 [2024-10-13 14:14:43.243185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.723 NULL1 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1671554 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:39.723 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:39.724 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:39.724 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.724 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.724 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.296 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.296 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:40.296 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.296 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.296 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.557 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.557 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:40.557 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.557 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.557 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.818 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.818 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:40.818 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.818 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.818 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.079 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.079 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:41.079 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.079 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.079 14:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.340 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.340 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:41.340 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.340 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.340 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.911 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.911 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:41.911 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.911 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.911 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.171 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.171 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:42.171 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.171 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.171 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.432 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.432 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:42.432 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.432 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.432 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.693 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.693 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:42.693 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.693 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.693 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.958 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.958 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:42.958 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.958 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.958 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.539 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.539 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:43.539 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.539 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.539 14:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.799 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.799 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:43.799 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.799 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.799 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.061 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.061 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:44.061 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.061 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.061 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.321 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.321 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:44.321 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.321 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.321 14:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.581 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.581 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:44.581 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.581 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.581 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.152 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.152 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:45.152 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.152 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.152 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.414 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.414 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:45.414 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.414 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.414 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.674 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.674 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:45.674 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.674 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.674 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.935 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.935 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:45.935 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.935 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.935 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.195 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:46.195 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.195 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.195 14:14:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.768 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.768 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:46.768 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.768 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.768 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.027 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.027 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:47.027 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.027 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.027 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.288 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.288 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:47.288 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.288 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.288 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.549 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.549 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:47.549 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.549 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.549 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.810 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.810 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:47.810 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.810 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.810 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.382 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.382 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:48.382 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.382 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.382 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.642 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.642 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:48.642 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.642 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.642 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.903 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.903 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:48.903 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.903 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.903 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.163 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.163 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:49.163 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.163 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.163 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.733 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.733 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:49.733 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.733 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.733 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.993 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.993 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:49.993 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.993 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.993 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.993 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1671554 00:18:50.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1671554) - No such process 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1671554 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.254 rmmod nvme_tcp 00:18:50.254 rmmod nvme_fabrics 00:18:50.254 rmmod nvme_keyring 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1671240 ']' 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # 
killprocess 1671240 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1671240 ']' 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1671240 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1671240 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1671240' 00:18:50.254 killing process with pid 1671240 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1671240 00:18:50.254 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1671240 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.515 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.426 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:52.426 00:18:52.426 real 0m21.812s 00:18:52.426 user 0m42.968s 00:18:52.426 sys 0m9.615s 00:18:52.426 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:52.426 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.426 ************************************ 00:18:52.426 END TEST nvmf_connect_stress 00:18:52.426 ************************************ 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.687 ************************************ 00:18:52.687 START TEST nvmf_fused_ordering 00:18:52.687 ************************************ 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:52.687 * Looking for test storage... 00:18:52.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.687 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.688 --rc genhtml_branch_coverage=1 00:18:52.688 --rc genhtml_function_coverage=1 00:18:52.688 --rc genhtml_legend=1 00:18:52.688 --rc geninfo_all_blocks=1 00:18:52.688 --rc geninfo_unexecuted_blocks=1 00:18:52.688 00:18:52.688 ' 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.688 --rc genhtml_branch_coverage=1 00:18:52.688 --rc genhtml_function_coverage=1 00:18:52.688 --rc genhtml_legend=1 00:18:52.688 --rc geninfo_all_blocks=1 00:18:52.688 --rc geninfo_unexecuted_blocks=1 00:18:52.688 00:18:52.688 ' 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.688 --rc genhtml_branch_coverage=1 00:18:52.688 --rc genhtml_function_coverage=1 00:18:52.688 --rc genhtml_legend=1 00:18:52.688 --rc geninfo_all_blocks=1 00:18:52.688 --rc geninfo_unexecuted_blocks=1 00:18:52.688 00:18:52.688 ' 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:52.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.688 --rc genhtml_branch_coverage=1 00:18:52.688 --rc genhtml_function_coverage=1 00:18:52.688 --rc genhtml_legend=1 00:18:52.688 --rc geninfo_all_blocks=1 00:18:52.688 --rc geninfo_unexecuted_blocks=1 00:18:52.688 00:18:52.688 ' 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.688 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.949 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:52.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:52.950 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:19:01.105 14:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:01.105 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:01.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:01.105 Found net devices under 0000:31:00.0: cvl_0_0 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:01.105 Found net devices under 0000:31:00.1: cvl_0_1 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.105 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.105 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.105 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.105 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:01.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:01.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms
00:19:01.106 
00:19:01.106 --- 10.0.0.2 ping statistics ---
00:19:01.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:01.106 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:01.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:01.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms
00:19:01.106 
00:19:01.106 --- 10.0.0.1 ping statistics ---
00:19:01.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:01.106 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1677905
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1677905
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1677905 ']'
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:01.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:01.106 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.106 [2024-10-13 14:15:04.218796] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:19:01.106 [2024-10-13 14:15:04.218865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:01.106 [2024-10-13 14:15:04.361023] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:19:01.106 [2024-10-13 14:15:04.409192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:01.106 [2024-10-13 14:15:04.435564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:01.106 [2024-10-13 14:15:04.435607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:01.106 [2024-10-13 14:15:04.435618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:01.106 [2024-10-13 14:15:04.435627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:01.106 [2024-10-13 14:15:04.435633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:01.106 [2024-10-13 14:15:04.436392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:01.367 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:01.367 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0
00:19:01.367 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:19:01.367 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:01.367 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 [2024-10-13 14:15:05.085631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 [2024-10-13 14:15:05.109843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 NULL1
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.628 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:19:01.628 [2024-10-13 14:15:05.178971] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:19:01.628 [2024-10-13 14:15:05.179015] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678148 ]
00:19:01.628 [2024-10-13 14:15:05.314792] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
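The rpc_cmd calls traced above configure the target before the fused_ordering tool attaches. rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent direct invocations would look roughly like this (paths as in this workspace; flags exactly as traced):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10          # serial number, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420              # listen inside the target namespace
    $rpc bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1000 MB null bdev is what the initiator later reports as "Namespace ID: 1 size: 1GB".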
00:19:02.200 Attached to nqn.2016-06.io.spdk:cnode1
00:19:02.200 Namespace ID: 1 size: 1GB
00:19:02.200 fused_ordering(0)
00:19:02.200 [fused_ordering(1) through fused_ordering(1022): 1024 per-operation completion lines in total, identical apart from the index, logged between 00:19:02.200 and 00:19:04.021]
00:19:04.021 fused_ordering(1023)
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:04.021 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:04.021 rmmod nvme_tcp
00:19:04.021 rmmod nvme_fabrics
00:19:04.021 rmmod nvme_keyring
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1677905 ']'
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1677905
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1677905 ']'
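The nvmftestfini sequence that begins above, together with the killprocess and nvmf_tcp_fini entries that follow, undoes the fixture in reverse: unload the NVMe/TCP kernel modules, kill the target (pid 1677905), strip the tagged iptables rules, and drop the test namespace. A condensed sketch; the ip netns delete line is an assumption about what _remove_spdk_ns amounts to here, the rest mirrors the logged commands:

    modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 1677905                   # killprocess then waits for the pid to exit
    # Remove only the rules tagged SPDK_NVMF, leaving the rest of the ruleset intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumption: the namespace created during setup
    ip -4 addr flush cvl_0_1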
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1677905
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1677905
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1677905'
00:19:04.282 killing process with pid 1677905
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1677905
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1677905
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:04.282 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:06.826 
00:19:06.826 real 0m13.852s
00:19:06.826 user 0m7.293s
00:19:06.826 sys 0m7.305s
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:19:06.826 ************************************
00:19:06.826 END TEST nvmf_fused_ordering
00:19:06.826 ************************************
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:06.826 ************************************
00:19:06.826 START TEST nvmf_ns_masking
00:19:06.826 ************************************
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:19:06.826 * Looking for test storage...
00:19:06.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
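The scripts/common.sh entries around this point are the lcov version gate: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field. A self-contained sketch of the same idea (simplified; the real helper also handles the other comparison operators):

    # Return 0 if $1 < $2 compared field-wise, mirroring the cmp_versions walk
    # traced here (1.15 < 2 is true, so the lcov coverage options get exported).
    version_lt() {
        local -a ver1 ver2
        local IFS=.-: v max
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal, so not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"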
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:19:06.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.826 --rc genhtml_branch_coverage=1
00:19:06.826 --rc genhtml_function_coverage=1
00:19:06.826 --rc genhtml_legend=1
00:19:06.826 --rc geninfo_all_blocks=1
00:19:06.826 --rc geninfo_unexecuted_blocks=1
00:19:06.826 
00:19:06.826 '
00:19:06.826 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:19:06.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.826 --rc genhtml_branch_coverage=1
00:19:06.826 --rc genhtml_function_coverage=1
00:19:06.826 --rc genhtml_legend=1
00:19:06.826 --rc geninfo_all_blocks=1
00:19:06.826 --rc geninfo_unexecuted_blocks=1
00:19:06.826 
00:19:06.826 '
00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:19:06.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.827 --rc genhtml_branch_coverage=1
00:19:06.827 --rc genhtml_function_coverage=1
00:19:06.827 --rc genhtml_legend=1
00:19:06.827 --rc geninfo_all_blocks=1
00:19:06.827 --rc geninfo_unexecuted_blocks=1
00:19:06.827 
00:19:06.827 '
00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:19:06.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:06.827 --rc genhtml_branch_coverage=1
00:19:06.827 --rc genhtml_function_coverage=1
00:19:06.827 --rc genhtml_legend=1
00:19:06.827 --rc geninfo_all_blocks=1
00:19:06.827 --rc geninfo_unexecuted_blocks=1
00:19:06.827 
00:19:06.827 '
00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
nvmf/common.sh@7 -- # uname -s 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bcc8acc3-27f6-4473-9601-87f0b7527470 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fea5e2a3-28a1-417c-8006-a7d0a7c2cb22 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=efc317eb-0129-4dca-bcdd-b0ae744bb4f2 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:06.827 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:14.971 14:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:14.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:14.971 14:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:14.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:14.971 Found net devices under 0000:31:00.0: cvl_0_0 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
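For orientation: the trace around this point resolves each allow-listed PCI address to its kernel net device by globbing sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A minimal standalone sketch of that lookup, assuming the NIC is bound to a kernel driver so its net/ directory exists (the address 0000:31:00.0 is the one reported above; this mirrors the pci_net_devs lines in the trace, it is not the harness itself):

    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev, e.g. .../net/cvl_0_0
    if [[ -e ${pci_net_devs[0]} ]]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs prefix, keep the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    fi

With nullglob unset, an unmatched glob stays literal, hence the -e guard before stripping the path.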
00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:14.971 Found net devices under 0000:31:00.1: cvl_0_1 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.971 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.972 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.972 14:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:19:14.972 00:19:14.972 --- 10.0.0.2 ping statistics --- 00:19:14.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.972 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:19:14.972 00:19:14.972 --- 10.0.0.1 ping statistics --- 00:19:14.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.972 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1683334 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1683334 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1683334 ']' 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.972 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.972 [2024-10-13 14:15:18.149276] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:19:14.972 [2024-10-13 14:15:18.149337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.972 [2024-10-13 14:15:18.293109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:14.972 [2024-10-13 14:15:18.334014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.972 [2024-10-13 14:15:18.360179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.972 [2024-10-13 14:15:18.360219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.972 [2024-10-13 14:15:18.360227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.972 [2024-10-13 14:15:18.360234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.972 [2024-10-13 14:15:18.360241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
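Before the per-NSID visibility checks that follow, the masking flow this test drives can be condensed into a short sketch; every invocation below appears verbatim later in this trace (the rpc.py path, subsystem NQN, and host NQN are the ones this run uses):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attach namespace 1 without auto-visibility: no host can see it until allowed.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Grant NSID 1 to host1, then revoke it again.
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # On the initiator side, visibility shows up (or not) in the namespace list,
    # and a masked NSID reports an all-zero NGUID:
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

Note also the failure mode exercised below: calling nvmf_ns_remove_host against NSID 2, which was added without --no-auto-visible, is rejected with JSON-RPC error -32602 (Invalid parameters).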
00:19:14.972 [2024-10-13 14:15:18.360930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.544 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.544 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:15.544 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:15.544 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.544 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:15.544 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.544 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.544 [2024-10-13 14:15:19.173753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.544 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:15.544 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:15.544 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:15.804 Malloc1 00:19:15.804 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:16.065 Malloc2 00:19:16.065 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:16.325 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:16.325 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.587 [2024-10-13 14:15:20.131616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.587 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:16.587 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I efc317eb-0129-4dca-bcdd-b0ae744bb4f2 -a 10.0.0.2 -s 4420 -i 4 00:19:16.587 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:16.587 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:16.587 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.587 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:16.587 
14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:19.137 [ 0]:0x1 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a29e8276895433f8ccce2513c789b66 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a29e8276895433f8ccce2513c789b66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:19.137 [ 0]:0x1 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a29e8276895433f8ccce2513c789b66 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a29e8276895433f8ccce2513c789b66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.137 14:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:19.137 [ 1]:0x2 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:19.137 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:19.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.398 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.659 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:19.659 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:19.659 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I efc317eb-0129-4dca-bcdd-b0ae744bb4f2 -a 10.0.0.2 -s 4420 -i 4 00:19:19.920 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:19.920 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:19.920 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.920 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:19:19.920 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:19:19.920 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.833 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.093 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.093 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.093 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:22.093 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.093 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:22.093 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.094 [ 0]:0x2 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.094 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.354 [ 0]:0x1 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a29e8276895433f8ccce2513c789b66 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a29e8276895433f8ccce2513c789b66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.354 [ 1]:0x2 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.354 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.615 14:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.615 [ 0]:0x2 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:22.615 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:22.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.876 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:22.876 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:22.876 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I efc317eb-0129-4dca-bcdd-b0ae744bb4f2 -a 10.0.0.2 -s 4420 -i 4 00:19:23.136 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:23.136 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:19:23.136 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.136 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:23.136 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:23.136 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:19:25.045 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:25.045 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:25.045 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:25.304 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:25.304 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.304 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:19:25.304 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:25.304 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:25.305 [ 0]:0x1 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a29e8276895433f8ccce2513c789b66 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a29e8276895433f8ccce2513c789b66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.305 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:25.305 [ 1]:0x2 00:19:25.305 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:25.305 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:25.566 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:25.828 [ 0]:0x2 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.828 14:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:25.828 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:25.828 [2024-10-13 14:15:29.515828] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:25.828 request: 00:19:25.828 { 00:19:25.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.828 "nsid": 2, 00:19:25.828 "host": "nqn.2016-06.io.spdk:host1", 00:19:25.828 "method": "nvmf_ns_remove_host", 00:19:25.828 "req_id": 1 00:19:25.828 } 00:19:25.828 Got JSON-RPC error response 00:19:25.828 response: 00:19:25.828 { 00:19:25.828 "code": -32602, 00:19:25.828 "message": "Invalid parameters" 00:19:25.828 } 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:26.090 14:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:26.090 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:26.091 [ 0]:0x2 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd14ac4c561f4b8bb54aaaeec9c84f02 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd14ac4c561f4b8bb54aaaeec9c84f02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:26.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1685831 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1685831 /var/tmp/host.sock 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1685831 ']' 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:26.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.091 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:26.091 [2024-10-13 14:15:29.781686] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:19:26.091 [2024-10-13 14:15:29.781737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685831 ] 00:19:26.352 [2024-10-13 14:15:29.912592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:26.352 [2024-10-13 14:15:29.959987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.352 [2024-10-13 14:15:29.977906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.923 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.923 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:19:26.923 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.184 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:27.445 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bcc8acc3-27f6-4473-9601-87f0b7527470 00:19:27.445 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:19:27.446 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BCC8ACC327F64473960187F0B7527470 -i 00:19:27.446 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fea5e2a3-28a1-417c-8006-a7d0a7c2cb22 00:19:27.446 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:19:27.446 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FEA5E2A328A1417C8006A7D0A7C2CB22 -i 00:19:27.707 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:27.968 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:27.968 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:27.968 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:28.228 nvme0n1 00:19:28.228 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:28.228 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:28.799 nvme1n2 00:19:28.799 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:28.799 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:28.799 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:28.799 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:28.799 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bcc8acc3-27f6-4473-9601-87f0b7527470 == \b\c\c\8\a\c\c\3\-\2\7\f\6\-\4\4\7\3\-\9\6\0\1\-\8\7\f\0\b\7\5\2\7\4\7\0 ]] 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:29.060 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:29.320 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ fea5e2a3-28a1-417c-8006-a7d0a7c2cb22 == \f\e\a\5\e\2\a\3\-\2\8\a\1\-\4\1\7\c\-\8\0\0\6\-\a\7\d\0\a\7\c\2\c\b\2\2 ]] 00:19:29.320 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1685831 00:19:29.320 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1685831 ']' 00:19:29.320 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1685831 00:19:29.320 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685831 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685831' 00:19:29.321 killing process with pid 1685831 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1685831 00:19:29.321 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1685831 00:19:29.581 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:29.841 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.842 rmmod nvme_tcp 00:19:29.842 rmmod nvme_fabrics 00:19:29.842 rmmod nvme_keyring 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1683334 ']' 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1683334 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- 
# '[' -z 1683334 ']' 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1683334 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1683334 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1683334' 00:19:29.842 killing process with pid 1683334 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1683334 00:19:29.842 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1683334 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.103 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.015 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.015 00:19:32.015 real 0m25.563s 00:19:32.015 user 0m25.657s 00:19:32.015 sys 0m8.001s 00:19:32.015 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.015 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:32.016 ************************************ 00:19:32.016 END TEST nvmf_ns_masking 00:19:32.016 ************************************ 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 
3 -le 1 ']' 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 ************************************ 00:19:32.277 START TEST nvmf_nvme_cli 00:19:32.277 ************************************ 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:32.277 * Looking for test storage... 00:19:32.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:32.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.277 --rc genhtml_branch_coverage=1 00:19:32.277 --rc genhtml_function_coverage=1 00:19:32.277 --rc genhtml_legend=1 00:19:32.277 --rc geninfo_all_blocks=1 00:19:32.277 --rc geninfo_unexecuted_blocks=1 00:19:32.277 00:19:32.277 ' 00:19:32.277 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:32.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.278 --rc genhtml_branch_coverage=1 00:19:32.278 --rc genhtml_function_coverage=1 00:19:32.278 --rc genhtml_legend=1 00:19:32.278 --rc geninfo_all_blocks=1 00:19:32.278 --rc geninfo_unexecuted_blocks=1 00:19:32.278 00:19:32.278 ' 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:32.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.278 --rc genhtml_branch_coverage=1 00:19:32.278 --rc genhtml_function_coverage=1 00:19:32.278 --rc genhtml_legend=1 00:19:32.278 --rc geninfo_all_blocks=1 00:19:32.278 --rc geninfo_unexecuted_blocks=1 00:19:32.278 00:19:32.278 ' 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:32.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.278 --rc genhtml_branch_coverage=1 00:19:32.278 --rc genhtml_function_coverage=1 00:19:32.278 --rc genhtml_legend=1 00:19:32.278 --rc geninfo_all_blocks=1 00:19:32.278 --rc geninfo_unexecuted_blocks=1 00:19:32.278 00:19:32.278 ' 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
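
(Aside: the xtrace above walks the lcov version gate — `lt 1.15 2` in scripts/common.sh splits each version on `.`, `-`, or `:` and compares components numerically. A minimal standalone sketch of that comparison logic follows; the helper name `ver_lt` is hypothetical, and it assumes purely numeric components. SPDK's real implementation is `lt`/`cmp_versions` in spdk/scripts/common.sh.)

  # Sketch of element-wise version comparison as traced above (numeric
  # components only; ver_lt is a hypothetical stand-in for scripts/common.sh).
  ver_lt() {
      local IFS=.-:          # split on the same separators the trace uses
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          # Missing components compare as 0, so 1.15 vs 2 acts like 1.15.0 vs 2.0.0.
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1               # equal versions are not "less than"
  }

  ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message
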
00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.278 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.539 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.540 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.540 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.540 14:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.540 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.684 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.684 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.684 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.684 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:40.685 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:40.685 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.685 
14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:40.685 Found net devices under 0000:31:00.0: cvl_0_0 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:40.685 Found net devices under 0000:31:00.1: cvl_0_1 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:19:40.685 00:19:40.685 --- 10.0.0.2 ping statistics --- 00:19:40.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.685 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:19:40.685 00:19:40.685 --- 10.0.0.1 ping statistics --- 00:19:40.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.685 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:40.685 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1690845 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1690845 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1690845 ']' 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.686 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:40.686 [2024-10-13 14:15:43.822704] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
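
(Aside: the nvmf_tcp_init steps traced above wire the two e810 ports into a point-to-point NVMe/TCP rig — the target-side port is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, an iptables rule opens port 4420, and reachability is ping-checked in both directions before nvmf_tgt is launched under `ip netns exec`. A condensed sketch of that sequence follows, using the interface and namespace names as logged; run as root, and expect different device names on other machines.)

  # Condensed from the nvmf_tcp_init trace above (names taken from the log).
  TGT_IF=cvl_0_0         # target-side port, moved into the namespace
  INI_IF=cvl_0_1         # initiator-side port, stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP listener port; the comment tag lets cleanup later
  # strip the rule with iptables-save | grep -v SPDK_NVMF.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF

  # Verify both directions before starting nvmf_tgt inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
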
00:19:40.686 [2024-10-13 14:15:43.822777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.686 [2024-10-13 14:15:43.966493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:40.686 [2024-10-13 14:15:44.014825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.686 [2024-10-13 14:15:44.044260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.686 [2024-10-13 14:15:44.044303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.686 [2024-10-13 14:15:44.044311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.686 [2024-10-13 14:15:44.044319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.686 [2024-10-13 14:15:44.044325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.686 [2024-10-13 14:15:44.046632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.686 [2024-10-13 14:15:44.046789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.686 [2024-10-13 14:15:44.046944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.686 [2024-10-13 14:15:44.046944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.947 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.947 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:19:40.947 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:40.947 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.947 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 [2024-10-13 14:15:44.701921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 Malloc0 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 Malloc1 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 [2024-10-13 14:15:44.812667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.208 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:19:41.468 00:19:41.468 Discovery Log Number of Records 2, Generation counter 2 00:19:41.468 =====Discovery Log Entry 0====== 00:19:41.468 trtype: tcp 00:19:41.468 adrfam: 
ipv4 00:19:41.468 subtype: current discovery subsystem 00:19:41.468 treq: not required 00:19:41.468 portid: 0 00:19:41.468 trsvcid: 4420 00:19:41.468 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:41.468 traddr: 10.0.0.2 00:19:41.468 eflags: explicit discovery connections, duplicate discovery information 00:19:41.468 sectype: none 00:19:41.468 =====Discovery Log Entry 1====== 00:19:41.468 trtype: tcp 00:19:41.468 adrfam: ipv4 00:19:41.468 subtype: nvme subsystem 00:19:41.468 treq: not required 00:19:41.468 portid: 0 00:19:41.468 trsvcid: 4420 00:19:41.469 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:41.469 traddr: 10.0.0.2 00:19:41.469 eflags: none 00:19:41.469 sectype: none 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:41.469 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:42.853 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:42.853 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:19:42.853 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.853 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:19:42.853 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:19:42.853 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # 
return 0 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:45.396 /dev/nvme0n2 ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:45.396 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:45.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:45.658 rmmod nvme_tcp 00:19:45.658 rmmod nvme_fabrics 00:19:45.658 rmmod nvme_keyring 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1690845 ']' 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1690845 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1690845 ']' 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1690845 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690845 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690845' 00:19:45.658 killing process with pid 1690845 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1690845 00:19:45.658 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1690845 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.919 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:48.466 00:19:48.466 real 0m15.781s 00:19:48.466 user 0m23.960s 00:19:48.466 sys 0m6.556s 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:48.466 ************************************ 00:19:48.466 END TEST nvmf_nvme_cli 00:19:48.466 ************************************ 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:48.466 ************************************ 00:19:48.466 START TEST nvmf_vfio_user 00:19:48.466 ************************************ 00:19:48.466 14:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:48.466 * Looking for test storage... 00:19:48.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:48.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.466 --rc genhtml_branch_coverage=1 00:19:48.466 --rc genhtml_function_coverage=1 00:19:48.466 --rc genhtml_legend=1 00:19:48.466 --rc geninfo_all_blocks=1 00:19:48.466 --rc geninfo_unexecuted_blocks=1 00:19:48.466 00:19:48.466 ' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:48.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.466 --rc genhtml_branch_coverage=1 00:19:48.466 --rc genhtml_function_coverage=1 00:19:48.466 --rc genhtml_legend=1 00:19:48.466 --rc geninfo_all_blocks=1 00:19:48.466 --rc geninfo_unexecuted_blocks=1 00:19:48.466 00:19:48.466 ' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:48.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.466 --rc genhtml_branch_coverage=1 00:19:48.466 --rc genhtml_function_coverage=1 00:19:48.466 --rc genhtml_legend=1 00:19:48.466 --rc geninfo_all_blocks=1 00:19:48.466 --rc geninfo_unexecuted_blocks=1 00:19:48.466 00:19:48.466 ' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:48.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.466 --rc genhtml_branch_coverage=1 00:19:48.466 --rc genhtml_function_coverage=1 00:19:48.466 --rc genhtml_legend=1 00:19:48.466 --rc geninfo_all_blocks=1 00:19:48.466 --rc geninfo_unexecuted_blocks=1 00:19:48.466 00:19:48.466 ' 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:48.466 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:48.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
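The NVME_HOSTNQN/NVME_HOSTID exported above are the same host identity the nvmf_nvme_cli run earlier in this log used when connecting. For reference, that connect-and-verify flow condenses to the sketch below, reconstructed from its xtrace; the 2-second poll and 16-try budget come from the trace, while the function body itself is a hedged paraphrase, not SPDK's verbatim autotest_common.sh helper.

# Hedged reconstruction of the serial-wait loop traced above (not SPDK's exact helper).
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL matches; grep -c exits nonzero on zero matches.
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial") || true
        (( found == expected )) && return 0
    done
    return 1
}

# Values as used in this run (cnode1 exposes two namespaces with one serial):
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
waitforserial SPDKISFASTANDAWESOME 2    # expect /dev/nvme0n1 and /dev/nvme0n2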
00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1692419 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1692419' 00:19:48.467 Process pid: 1692419 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1692419 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1692419 ']' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.467 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:48.467 [2024-10-13 14:15:51.933143] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:19:48.467 [2024-10-13 14:15:51.933191] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.467 [2024-10-13 14:15:52.059895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:48.467 [2024-10-13 14:15:52.109012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.467 [2024-10-13 14:15:52.134305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
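The bring-up just traced reduces to launching nvmf_tgt and blocking until its RPC socket answers (the remaining app_setup_trace notices continue below). A minimal sketch, assuming rpc_get_methods as a readiness probe where the real waitforlisten helper polls the socket directly:

# Launch the target: shm id 0 (-i), all tracepoint groups (-e 0xFFFF), cores 0-3 (-m).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!

# Block until the app answers on the default RPC socket; a stand-in for waitforlisten.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done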
00:19:48.467 [2024-10-13 14:15:52.134344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.467 [2024-10-13 14:15:52.134350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.467 [2024-10-13 14:15:52.134355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.467 [2024-10-13 14:15:52.134360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.467 [2024-10-13 14:15:52.136227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.467 [2024-10-13 14:15:52.136394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.467 [2024-10-13 14:15:52.136542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.467 [2024-10-13 14:15:52.136542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.039 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.039 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:49.039 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:50.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:50.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:50.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:50.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:50.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:50.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:50.424 Malloc1 00:19:50.685 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:50.685 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:50.945 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:51.206 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:51.206 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:51.206 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:51.206 Malloc2 00:19:51.206 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:51.467 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:51.728 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:51.728 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:51.728 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:51.992 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:51.992 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:51.992 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:51.992 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:51.992 [2024-10-13 14:15:55.458796] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:19:51.992 [2024-10-13 14:15:55.458840] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693115 ] 00:19:51.992 [2024-10-13 14:15:55.570412] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
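Before the identify trace continues, the vfio-user provisioning that spdk_nvme_identify is now attaching to condenses to the RPC sequence below; commands and paths are verbatim from this run, and the listener address is the directory in which the transport creates the cntrl socket file:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second controller (cnode2, Malloc2, vfio-user2/2) is set up the same way in the trace above.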
00:19:51.992 [2024-10-13 14:15:55.587244] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:51.992 [2024-10-13 14:15:55.594258] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:51.992 [2024-10-13 14:15:55.594273] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3b14948000 00:19:51.992 [2024-10-13 14:15:55.595252] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.596262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.597263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.598264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.599265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.600268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.601275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.602275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:51.992 [2024-10-13 14:15:55.603281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:51.992 [2024-10-13 14:15:55.603289] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3b1364b000 00:19:51.992 [2024-10-13 14:15:55.604204] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:51.992 [2024-10-13 14:15:55.616343] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:51.992 [2024-10-13 14:15:55.616366] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:51.992 [2024-10-13 14:15:55.621340] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:51.992 [2024-10-13 14:15:55.621375] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:51.992 [2024-10-13 14:15:55.621439] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:51.992 [2024-10-13 14:15:55.621453] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:51.992 [2024-10-13 14:15:55.621457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
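The 8-byte read at offset 0x0 above (value 0x201e0100ff) is the NVMe CAP register, and it fixes several of the limits echoed in the identify dump further down. A quick decode, assuming the standard NVMe base-spec field layout:

cap=0x201e0100ff
printf 'CAP.MQES   = %d  -> max queue entries %d\n'     $(( cap & 0xffff )) $(( (cap & 0xffff) + 1 ))
printf 'CAP.CQR    = %d  -> contiguous queues required\n' $(( (cap >> 16) & 1 ))
printf 'CAP.TO     = %d  -> reset timeout %d ms\n'      $(( (cap >> 24) & 0xff )) $(( ((cap >> 24) & 0xff) * 500 ))
printf 'CAP.DSTRD  = %d  -> doorbell stride %d bytes\n' $(( (cap >> 32) & 0xf )) $(( 4 << ((cap >> 32) & 0xf) ))
printf 'CAP.MPSMIN = %d  -> min page size %d bytes\n'   $(( (cap >> 48) & 0xf )) $(( 4096 << ((cap >> 48) & 0xf) ))

That gives 256 maximum queue entries, contiguous queues required, a 15000 ms reset timeout, a 4-byte doorbell stride, and 4096-byte pages, all of which reappear in the controller capability table below. The VS read that follows (offset 0x8, value 0x10300) decodes the same way to NVMe 1.3 (major in bits 31:16, minor in bits 15:8).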
00:19:51.992 [2024-10-13 14:15:55.622340] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:51.992 [2024-10-13 14:15:55.622347] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:51.992 [2024-10-13 14:15:55.622352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:51.992 [2024-10-13 14:15:55.623346] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:51.992 [2024-10-13 14:15:55.623352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:51.992 [2024-10-13 14:15:55.623357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:51.992 [2024-10-13 14:15:55.624346] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:51.992 [2024-10-13 14:15:55.624352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:51.992 [2024-10-13 14:15:55.625351] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:51.992 [2024-10-13 14:15:55.625359] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:51.992 [2024-10-13 14:15:55.625363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:51.992 [2024-10-13 14:15:55.625367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:51.992 [2024-10-13 14:15:55.625471] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:51.992 [2024-10-13 14:15:55.625475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:51.992 [2024-10-13 14:15:55.625479] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003a0000 00:19:51.992 [2024-10-13 14:15:55.626357] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x20000039e000 00:19:51.992 [2024-10-13 14:15:55.627360] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:51.992 [2024-10-13 14:15:55.628370] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:51.992 [2024-10-13 14:15:55.629369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:51.992 [2024-10-13 14:15:55.629412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:19:51.992 [2024-10-13 14:15:55.630382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:51.992 [2024-10-13 14:15:55.630387] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:51.992 [2024-10-13 14:15:55.630391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:51.992 [2024-10-13 14:15:55.630406] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:51.992 [2024-10-13 14:15:55.630411] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:51.992 [2024-10-13 14:15:55.630423] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:19:51.992 [2024-10-13 14:15:55.630427] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:51.992 [2024-10-13 14:15:55.630429] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.992 [2024-10-13 14:15:55.630439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:51.992 [2024-10-13 14:15:55.630472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:51.992 [2024-10-13 14:15:55.630479] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:51.992 [2024-10-13 14:15:55.630482] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:51.992 [2024-10-13 14:15:55.630486] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:51.992 [2024-10-13 14:15:55.630489] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:51.992 [2024-10-13 14:15:55.630492] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:51.992 [2024-10-13 14:15:55.630498] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:51.992 [2024-10-13 14:15:55.630502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:51.992 [2024-10-13 14:15:55.630507] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:51.992 [2024-10-13 14:15:55.630514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:51.992 [2024-10-13 14:15:55.630524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.993 [2024-10-13 14:15:55.630538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.993 [2024-10-13 14:15:55.630544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.993 [2024-10-13 14:15:55.630550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.993 [2024-10-13 14:15:55.630553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630579] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:51.993 [2024-10-13 14:15:55.630582] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630664] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d9000 len:4096 00:19:51.993 [2024-10-13 14:15:55.630667] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d9000 00:19:51.993 [2024-10-13 14:15:55.630669] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.993 [2024-10-13 14:15:55.630674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002d9000 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630690] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:51.993 [2024-10-13 14:15:55.630699] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630709] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:19:51.993 [2024-10-13 14:15:55.630712] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:51.993 [2024-10-13 14:15:55.630715] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.993 [2024-10-13 14:15:55.630719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630756] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:19:51.993 [2024-10-13 14:15:55.630759] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:51.993 [2024-10-13 14:15:55.630761] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.993 [2024-10-13 14:15:55.630765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630801] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630805] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630808] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:51.993 [2024-10-13 14:15:55.630811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:51.993 [2024-10-13 14:15:55.630815] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:51.993 [2024-10-13 14:15:55.630830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630897] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d6000 len:8192 00:19:51.993 [2024-10-13 14:15:55.630900] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d6000 00:19:51.993 [2024-10-13 14:15:55.630903] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002d7000 00:19:51.993 [2024-10-13 14:15:55.630905] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002d7000 00:19:51.993 [2024-10-13 14:15:55.630908] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:51.993 [2024-10-13 14:15:55.630912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002d6000 PRP2 0x2000002d7000 00:19:51.993 [2024-10-13 14:15:55.630917] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dc000 len:512 00:19:51.993 [2024-10-13 14:15:55.630920] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dc000 00:19:51.993 [2024-10-13 14:15:55.630923] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.993 [2024-10-13 14:15:55.630927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002dc000 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630932] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:512 00:19:51.993 [2024-10-13 14:15:55.630935] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:51.993 [2024-10-13 14:15:55.630938] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.993 [2024-10-13 14:15:55.630942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630947] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d4000 len:4096 00:19:51.993 [2024-10-13 14:15:55.630950] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d4000 00:19:51.993 [2024-10-13 14:15:55.630952] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:51.993 [2024-10-13 14:15:55.630957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002d4000 PRP2 0x0 00:19:51.993 [2024-10-13 14:15:55.630962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:51.993 [2024-10-13 14:15:55.630984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:51.993 ===================================================== 00:19:51.993 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:51.993 ===================================================== 00:19:51.993 Controller Capabilities/Features 00:19:51.993 ================================ 00:19:51.993 Vendor ID: 4e58 00:19:51.993 Subsystem Vendor ID: 4e58 00:19:51.993 Serial Number: SPDK1 00:19:51.993 Model Number: SPDK bdev Controller 00:19:51.993 Firmware Version: 25.01 00:19:51.993 Recommended Arb Burst: 6 00:19:51.993 IEEE OUI Identifier: 8d 6b 50 00:19:51.993 Multi-path I/O 00:19:51.993 May have multiple subsystem ports: Yes 00:19:51.993 May have multiple controllers: Yes 00:19:51.993 Associated with SR-IOV VF: No 00:19:51.993 Max Data Transfer Size: 131072 00:19:51.993 Max Number of Namespaces: 32 00:19:51.993 Max Number of I/O Queues: 127 00:19:51.993 NVMe Specification Version (VS): 1.3 00:19:51.993 NVMe Specification Version (Identify): 1.3 00:19:51.993 Maximum Queue Entries: 256 00:19:51.993 Contiguous Queues Required: Yes 00:19:51.993 Arbitration Mechanisms Supported 00:19:51.993 Weighted Round Robin: Not Supported 00:19:51.993 Vendor Specific: Not Supported 00:19:51.993 Reset Timeout: 15000 ms 00:19:51.994 Doorbell Stride: 4 bytes 00:19:51.994 NVM Subsystem Reset: Not Supported 00:19:51.994 Command Sets Supported 00:19:51.994 NVM Command Set: Supported 00:19:51.994 Boot Partition: Not Supported 00:19:51.994 Memory Page Size Minimum: 4096 bytes 00:19:51.994 Memory Page Size Maximum: 4096 bytes 00:19:51.994 Persistent Memory Region: Not Supported 00:19:51.994 Optional Asynchronous Events Supported 00:19:51.994 Namespace Attribute Notices: 
Supported 00:19:51.994 Firmware Activation Notices: Not Supported 00:19:51.994 ANA Change Notices: Not Supported 00:19:51.994 PLE Aggregate Log Change Notices: Not Supported 00:19:51.994 LBA Status Info Alert Notices: Not Supported 00:19:51.994 EGE Aggregate Log Change Notices: Not Supported 00:19:51.994 Normal NVM Subsystem Shutdown event: Not Supported 00:19:51.994 Zone Descriptor Change Notices: Not Supported 00:19:51.994 Discovery Log Change Notices: Not Supported 00:19:51.994 Controller Attributes 00:19:51.994 128-bit Host Identifier: Supported 00:19:51.994 Non-Operational Permissive Mode: Not Supported 00:19:51.994 NVM Sets: Not Supported 00:19:51.994 Read Recovery Levels: Not Supported 00:19:51.994 Endurance Groups: Not Supported 00:19:51.994 Predictable Latency Mode: Not Supported 00:19:51.994 Traffic Based Keep ALive: Not Supported 00:19:51.994 Namespace Granularity: Not Supported 00:19:51.994 SQ Associations: Not Supported 00:19:51.994 UUID List: Not Supported 00:19:51.994 Multi-Domain Subsystem: Not Supported 00:19:51.994 Fixed Capacity Management: Not Supported 00:19:51.994 Variable Capacity Management: Not Supported 00:19:51.994 Delete Endurance Group: Not Supported 00:19:51.994 Delete NVM Set: Not Supported 00:19:51.994 Extended LBA Formats Supported: Not Supported 00:19:51.994 Flexible Data Placement Supported: Not Supported 00:19:51.994 00:19:51.994 Controller Memory Buffer Support 00:19:51.994 ================================ 00:19:51.994 Supported: No 00:19:51.994 00:19:51.994 Persistent Memory Region Support 00:19:51.994 ================================ 00:19:51.994 Supported: No 00:19:51.994 00:19:51.994 Admin Command Set Attributes 00:19:51.994 ============================ 00:19:51.994 Security Send/Receive: Not Supported 00:19:51.994 Format NVM: Not Supported 00:19:51.994 Firmware Activate/Download: Not Supported 00:19:51.994 Namespace Management: Not Supported 00:19:51.994 Device Self-Test: Not Supported 00:19:51.994 Directives: Not Supported 00:19:51.994 NVMe-MI: Not Supported 00:19:51.994 Virtualization Management: Not Supported 00:19:51.994 Doorbell Buffer Config: Not Supported 00:19:51.994 Get LBA Status Capability: Not Supported 00:19:51.994 Command & Feature Lockdown Capability: Not Supported 00:19:51.994 Abort Command Limit: 4 00:19:51.994 Async Event Request Limit: 4 00:19:51.994 Number of Firmware Slots: N/A 00:19:51.994 Firmware Slot 1 Read-Only: N/A 00:19:51.994 Firmware Activation Without Reset: N/A 00:19:51.994 Multiple Update Detection Support: N/A 00:19:51.994 Firmware Update Granularity: No Information Provided 00:19:51.994 Per-Namespace SMART Log: No 00:19:51.994 Asymmetric Namespace Access Log Page: Not Supported 00:19:51.994 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:51.994 Command Effects Log Page: Supported 00:19:51.994 Get Log Page Extended Data: Supported 00:19:51.994 Telemetry Log Pages: Not Supported 00:19:51.994 Persistent Event Log Pages: Not Supported 00:19:51.994 Supported Log Pages Log Page: May Support 00:19:51.994 Commands Supported & Effects Log Page: Not Supported 00:19:51.994 Feature Identifiers & Effects Log Page:May Support 00:19:51.994 NVMe-MI Commands & Effects Log Page: May Support 00:19:51.994 Data Area 4 for Telemetry Log: Not Supported 00:19:51.994 Error Log Page Entries Supported: 128 00:19:51.994 Keep Alive: Supported 00:19:51.994 Keep Alive Granularity: 10000 ms 00:19:51.994 00:19:51.994 NVM Command Set Attributes 00:19:51.994 ========================== 00:19:51.994 Submission Queue Entry Size 00:19:51.994 Max: 64 
00:19:51.994 Min: 64 00:19:51.994 Completion Queue Entry Size 00:19:51.994 Max: 16 00:19:51.994 Min: 16 00:19:51.994 Number of Namespaces: 32 00:19:51.994 Compare Command: Supported 00:19:51.994 Write Uncorrectable Command: Not Supported 00:19:51.994 Dataset Management Command: Supported 00:19:51.994 Write Zeroes Command: Supported 00:19:51.994 Set Features Save Field: Not Supported 00:19:51.994 Reservations: Not Supported 00:19:51.994 Timestamp: Not Supported 00:19:51.994 Copy: Supported 00:19:51.994 Volatile Write Cache: Present 00:19:51.994 Atomic Write Unit (Normal): 1 00:19:51.994 Atomic Write Unit (PFail): 1 00:19:51.994 Atomic Compare & Write Unit: 1 00:19:51.994 Fused Compare & Write: Supported 00:19:51.994 Scatter-Gather List 00:19:51.994 SGL Command Set: Supported (Dword aligned) 00:19:51.994 SGL Keyed: Not Supported 00:19:51.994 SGL Bit Bucket Descriptor: Not Supported 00:19:51.994 SGL Metadata Pointer: Not Supported 00:19:51.994 Oversized SGL: Not Supported 00:19:51.994 SGL Metadata Address: Not Supported 00:19:51.994 SGL Offset: Not Supported 00:19:51.994 Transport SGL Data Block: Not Supported 00:19:51.994 Replay Protected Memory Block: Not Supported 00:19:51.994 00:19:51.994 Firmware Slot Information 00:19:51.994 ========================= 00:19:51.994 Active slot: 1 00:19:51.994 Slot 1 Firmware Revision: 25.01 00:19:51.994 00:19:51.994 00:19:51.994 Commands Supported and Effects 00:19:51.994 ============================== 00:19:51.994 Admin Commands 00:19:51.994 -------------- 00:19:51.994 Get Log Page (02h): Supported 00:19:51.994 Identify (06h): Supported 00:19:51.994 Abort (08h): Supported 00:19:51.994 Set Features (09h): Supported 00:19:51.994 Get Features (0Ah): Supported 00:19:51.994 Asynchronous Event Request (0Ch): Supported 00:19:51.994 Keep Alive (18h): Supported 00:19:51.994 I/O Commands 00:19:51.994 ------------ 00:19:51.994 Flush (00h): Supported LBA-Change 00:19:51.994 Write (01h): Supported LBA-Change 00:19:51.994 Read (02h): Supported 00:19:51.994 Compare (05h): Supported 00:19:51.994 Write Zeroes (08h): Supported LBA-Change 00:19:51.994 Dataset Management (09h): Supported LBA-Change 00:19:51.994 Copy (19h): Supported LBA-Change 00:19:51.994 00:19:51.994 Error Log 00:19:51.994 ========= 00:19:51.994 00:19:51.994 Arbitration 00:19:51.994 =========== 00:19:51.994 Arbitration Burst: 1 00:19:51.994 00:19:51.994 Power Management 00:19:51.994 ================ 00:19:51.994 Number of Power States: 1 00:19:51.994 Current Power State: Power State #0 00:19:51.994 Power State #0: 00:19:51.994 Max Power: 0.00 W 00:19:51.994 Non-Operational State: Operational 00:19:51.994 Entry Latency: Not Reported 00:19:51.994 Exit Latency: Not Reported 00:19:51.994 Relative Read Throughput: 0 00:19:51.994 Relative Read Latency: 0 00:19:51.994 Relative Write Throughput: 0 00:19:51.994 Relative Write Latency: 0 00:19:51.994 Idle Power: Not Reported 00:19:51.994 Active Power: Not Reported 00:19:51.994 Non-Operational Permissive Mode: Not Supported 00:19:51.994 00:19:51.994 Health Information 00:19:51.994 ================== 00:19:51.994 Critical Warnings: 00:19:51.994 Available Spare Space: OK 00:19:51.994 Temperature: OK 00:19:51.994 Device Reliability: OK 00:19:51.994 Read Only: No 00:19:51.994 Volatile Memory Backup: OK 00:19:51.994 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:51.994 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:51.994 Available Spare: 0% 00:19:51.994 Available Sp[2024-10-13 14:15:55.631058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:51.994 [2024-10-13 14:15:55.631071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:51.994 [2024-10-13 14:15:55.631091] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:51.994 [2024-10-13 14:15:55.631098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.994 [2024-10-13 14:15:55.631103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.994 [2024-10-13 14:15:55.631107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.994 [2024-10-13 14:15:55.631111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.994 [2024-10-13 14:15:55.631382] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:51.994 [2024-10-13 14:15:55.631389] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:51.994 [2024-10-13 14:15:55.632382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:51.994 [2024-10-13 14:15:55.632419] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:51.994 [2024-10-13 14:15:55.632424] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:51.994 [2024-10-13 14:15:55.633385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:51.994 [2024-10-13 14:15:55.633393] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:51.994 [2024-10-13 14:15:55.633446] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:51.994 [2024-10-13 14:15:55.639069] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:51.994 are Threshold: 0% 00:19:51.994 Life Percentage Used: 0% 00:19:51.994 Data Units Read: 0 00:19:51.995 Data Units Written: 0 00:19:51.995 Host Read Commands: 0 00:19:51.995 Host Write Commands: 0 00:19:51.995 Controller Busy Time: 0 minutes 00:19:51.995 Power Cycles: 0 00:19:51.995 Power On Hours: 0 hours 00:19:51.995 Unsafe Shutdowns: 0 00:19:51.995 Unrecoverable Media Errors: 0 00:19:51.995 Lifetime Error Log Entries: 0 00:19:51.995 Warning Temperature Time: 0 minutes 00:19:51.995 Critical Temperature Time: 0 minutes 00:19:51.995 00:19:51.995 Number of Queues 00:19:51.995 ================ 00:19:51.995 Number of I/O Submission Queues: 127 00:19:51.995 Number of I/O Completion Queues: 127 00:19:51.995 00:19:51.995 Active Namespaces 00:19:51.995 ================= 00:19:51.995 Namespace ID:1 00:19:51.995 Error Recovery Timeout: Unlimited 00:19:51.995 Command Set Identifier: NVM (00h) 00:19:51.995 Deallocate: Supported 00:19:51.995 Deallocated/Unwritten Error: Not 
Supported 00:19:51.995 Deallocated Read Value: Unknown 00:19:51.995 Deallocate in Write Zeroes: Not Supported 00:19:51.995 Deallocated Guard Field: 0xFFFF 00:19:51.995 Flush: Supported 00:19:51.995 Reservation: Supported 00:19:51.995 Namespace Sharing Capabilities: Multiple Controllers 00:19:51.995 Size (in LBAs): 131072 (0GiB) 00:19:51.995 Capacity (in LBAs): 131072 (0GiB) 00:19:51.995 Utilization (in LBAs): 131072 (0GiB) 00:19:51.995 NGUID: 9EC33F69686E4DA49B26219222D8A45F 00:19:51.995 UUID: 9ec33f69-686e-4da4-9b26-219222d8a45f 00:19:51.995 Thin Provisioning: Not Supported 00:19:51.995 Per-NS Atomic Units: Yes 00:19:51.995 Atomic Boundary Size (Normal): 0 00:19:51.995 Atomic Boundary Size (PFail): 0 00:19:51.995 Atomic Boundary Offset: 0 00:19:51.995 Maximum Single Source Range Length: 65535 00:19:51.995 Maximum Copy Length: 65535 00:19:51.995 Maximum Source Range Count: 1 00:19:51.995 NGUID/EUI64 Never Reused: No 00:19:51.995 Namespace Write Protected: No 00:19:51.995 Number of LBA Formats: 1 00:19:51.995 Current LBA Format: LBA Format #00 00:19:51.995 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:51.995 00:19:51.995 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:52.255 [2024-10-13 14:15:55.916803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:57.542 Initializing NVMe Controllers 00:19:57.542 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:57.542 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:57.542 Initialization complete. Launching workers. 00:19:57.542 ======================================================== 00:19:57.542 Latency(us) 00:19:57.542 Device Information : IOPS MiB/s Average min max 00:19:57.542 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39916.80 155.93 3206.54 844.82 8922.94 00:19:57.542 ======================================================== 00:19:57.542 Total : 39916.80 155.93 3206.54 844.82 8922.94 00:19:57.542 00:19:57.542 [2024-10-13 14:16:00.922254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:57.542 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:57.542 [2024-10-13 14:16:01.209691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:02.924 Initializing NVMe Controllers 00:20:02.924 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:02.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:02.924 Initialization complete. Launching workers. 
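The read-phase results above and the write-phase results below come from the same spdk_nvme_perf binary; only the -w argument differs between the two harness invocations. A minimal reproduction sketch in bash, assuming the same SPDK build tree and a vfio-user target already listening at /var/run/vfio-user/domain/vfio-user1/1 (the transport string and flags are copied from the log; the loop itself is illustrative):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # -q 128 queue depth, -o 4096 I/O size, -t 5 seconds; -c 0x2 pins the worker
  # to core 1, which matches the "NSID 1 with lcore 1" association in the output
  for workload in read write; do
      "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done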
00:20:02.924 ======================================================== 00:20:02.924 Latency(us) 00:20:02.924 Device Information : IOPS MiB/s Average min max 00:20:02.924 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15951.31 62.31 8030.02 6001.07 16000.34 00:20:02.924 ======================================================== 00:20:02.924 Total : 15951.31 62.31 8030.02 6001.07 16000.34 00:20:02.924 00:20:02.924 [2024-10-13 14:16:06.242649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:02.924 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:02.924 [2024-10-13 14:16:06.533089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:08.274 [2024-10-13 14:16:11.592245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:08.274 Initializing NVMe Controllers 00:20:08.274 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:08.274 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:08.274 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:20:08.274 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:20:08.274 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:20:08.274 Initialization complete. Launching workers. 00:20:08.274 Starting thread on core 2 00:20:08.274 Starting thread on core 3 00:20:08.274 Starting thread on core 1 00:20:08.274 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:20:08.274 [2024-10-13 14:16:11.932303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:11.571 [2024-10-13 14:16:14.980407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:11.571 Initializing NVMe Controllers 00:20:11.571 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:11.571 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:11.571 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:20:11.571 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:20:11.571 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:20:11.571 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:20:11.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:11.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:11.571 Initialization complete. Launching workers. 
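In the arbitration run that follows, the printed effective configuration shows -c 0xf, which spawns one worker per core 0 through 3; the example then restarts each thread with an urgent priority queue, as its output below records. A sketch of invoking the example directly with the harness's arguments (same build-tree path assumption as above):

  ARB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration
  "$ARB" -t 3 -d 256 -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'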
00:20:11.571 Starting thread on core 1 with urgent priority queue 00:20:11.571 Starting thread on core 2 with urgent priority queue 00:20:11.571 Starting thread on core 3 with urgent priority queue 00:20:11.571 Starting thread on core 0 with urgent priority queue 00:20:11.571 SPDK bdev Controller (SPDK1 ) core 0: 5363.67 IO/s 18.64 secs/100000 ios 00:20:11.571 SPDK bdev Controller (SPDK1 ) core 1: 4781.33 IO/s 20.91 secs/100000 ios 00:20:11.571 SPDK bdev Controller (SPDK1 ) core 2: 6347.00 IO/s 15.76 secs/100000 ios 00:20:11.571 SPDK bdev Controller (SPDK1 ) core 3: 5571.67 IO/s 17.95 secs/100000 ios 00:20:11.571 ======================================================== 00:20:11.571 00:20:11.571 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:11.832 [2024-10-13 14:16:15.307436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:11.832 Initializing NVMe Controllers 00:20:11.832 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:11.832 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:11.832 Namespace ID: 1 size: 0GB 00:20:11.832 Initialization complete. 00:20:11.832 INFO: using host memory buffer for IO 00:20:11.832 Hello world! 00:20:11.832 [2024-10-13 14:16:15.341569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:11.832 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:12.092 [2024-10-13 14:16:15.665361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:13.034 Initializing NVMe Controllers 00:20:13.034 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:13.034 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:13.034 Initialization complete. Launching workers. 
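The overhead tool attaches once and then prints the submit-path and complete-path latency histograms that follow ("Range in us" buckets with cumulative percentages and counts). An invocation sketch mirroring the harness command above; the -H flag appears to be what enables the histogram output, judging from this run:

  OVERHEAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead
  # -o 4096 I/O size, -t 1 second; -H prints the per-path histograms seen below
  "$OVERHEAD" -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'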
00:20:13.034 submit (in ns) avg, min, max = 6213.4, 2876.7, 4007864.2 00:20:13.034 complete (in ns) avg, min, max = 17207.1, 1629.6, 4007378.0 00:20:13.034 00:20:13.034 Submit histogram 00:20:13.034 ================ 00:20:13.034 Range in us Cumulative Count 00:20:13.034 2.873 - 2.887: 0.0198% ( 4) 00:20:13.034 2.887 - 2.900: 0.1188% ( 20) 00:20:13.034 2.900 - 2.913: 0.6683% ( 111) 00:20:13.034 2.913 - 2.927: 2.7226% ( 415) 00:20:13.034 2.927 - 2.940: 5.5492% ( 571) 00:20:13.034 2.940 - 2.954: 9.2372% ( 745) 00:20:13.034 2.954 - 2.967: 14.1874% ( 1000) 00:20:13.034 2.967 - 2.980: 20.2119% ( 1217) 00:20:13.034 2.980 - 2.994: 26.8700% ( 1345) 00:20:13.034 2.994 - 3.007: 33.9043% ( 1421) 00:20:13.034 3.007 - 3.020: 40.4534% ( 1323) 00:20:13.034 3.020 - 3.034: 47.0224% ( 1327) 00:20:13.034 3.034 - 3.047: 53.8241% ( 1374) 00:20:13.034 3.047 - 3.060: 62.4375% ( 1740) 00:20:13.034 3.060 - 3.074: 71.4470% ( 1820) 00:20:13.034 3.074 - 3.087: 79.1842% ( 1563) 00:20:13.034 3.087 - 3.101: 85.9413% ( 1365) 00:20:13.034 3.101 - 3.114: 91.4410% ( 1111) 00:20:13.034 3.114 - 3.127: 94.9557% ( 710) 00:20:13.034 3.127 - 3.141: 97.2675% ( 467) 00:20:13.034 3.141 - 3.154: 98.5199% ( 253) 00:20:13.034 3.154 - 3.167: 99.1436% ( 126) 00:20:13.034 3.167 - 3.181: 99.4109% ( 54) 00:20:13.034 3.181 - 3.194: 99.5446% ( 27) 00:20:13.034 3.194 - 3.207: 99.5891% ( 9) 00:20:13.034 3.207 - 3.221: 99.5941% ( 1) 00:20:13.034 3.234 - 3.248: 99.6040% ( 2) 00:20:13.034 3.248 - 3.261: 99.6089% ( 1) 00:20:13.034 3.274 - 3.288: 99.6139% ( 1) 00:20:13.034 3.301 - 3.314: 99.6188% ( 1) 00:20:13.034 3.328 - 3.341: 99.6238% ( 1) 00:20:13.034 3.354 - 3.368: 99.6287% ( 1) 00:20:13.034 3.395 - 3.408: 99.6337% ( 1) 00:20:13.034 3.421 - 3.448: 99.6386% ( 1) 00:20:13.034 3.475 - 3.502: 99.6436% ( 1) 00:20:13.034 3.555 - 3.582: 99.6485% ( 1) 00:20:13.034 3.662 - 3.689: 99.6535% ( 1) 00:20:13.034 3.715 - 3.742: 99.6634% ( 2) 00:20:13.034 3.983 - 4.009: 99.6683% ( 1) 00:20:13.034 4.143 - 4.170: 99.6733% ( 1) 00:20:13.034 4.250 - 4.277: 99.6832% ( 2) 00:20:13.034 4.303 - 4.330: 99.6881% ( 1) 00:20:13.034 4.651 - 4.678: 99.6931% ( 1) 00:20:13.034 4.678 - 4.704: 99.6980% ( 1) 00:20:13.034 4.811 - 4.838: 99.7030% ( 1) 00:20:13.034 4.918 - 4.945: 99.7079% ( 1) 00:20:13.034 4.945 - 4.972: 99.7178% ( 2) 00:20:13.034 4.972 - 4.998: 99.7228% ( 1) 00:20:13.034 5.132 - 5.159: 99.7277% ( 1) 00:20:13.034 5.159 - 5.185: 99.7327% ( 1) 00:20:13.034 5.185 - 5.212: 99.7426% ( 2) 00:20:13.034 5.212 - 5.239: 99.7525% ( 2) 00:20:13.034 5.239 - 5.266: 99.7574% ( 1) 00:20:13.034 5.399 - 5.426: 99.7624% ( 1) 00:20:13.034 5.479 - 5.506: 99.7673% ( 1) 00:20:13.034 5.693 - 5.720: 99.7723% ( 1) 00:20:13.034 5.800 - 5.827: 99.7822% ( 2) 00:20:13.034 5.854 - 5.880: 99.7871% ( 1) 00:20:13.034 5.880 - 5.907: 99.7921% ( 1) 00:20:13.034 5.987 - 6.014: 99.7970% ( 1) 00:20:13.035 6.014 - 6.041: 99.8020% ( 1) 00:20:13.035 6.041 - 6.067: 99.8119% ( 2) 00:20:13.035 6.067 - 6.094: 99.8168% ( 1) 00:20:13.035 6.094 - 6.121: 99.8317% ( 3) 00:20:13.035 6.121 - 6.148: 99.8416% ( 2) 00:20:13.035 6.362 - 6.388: 99.8465% ( 1) 00:20:13.035 6.388 - 6.415: 99.8515% ( 1) 00:20:13.035 6.415 - 6.442: 99.8564% ( 1) 00:20:13.035 6.468 - 6.495: 99.8614% ( 1) 00:20:13.035 6.575 - 6.602: 99.8713% ( 2) 00:20:13.035 6.629 - 6.656: 99.8762% ( 1) 00:20:13.035 6.682 - 6.709: 99.8812% ( 1) 00:20:13.035 6.762 - 6.789: 99.8861% ( 1) 00:20:13.035 6.843 - 6.896: 99.9010% ( 3) 00:20:13.035 6.950 - 7.003: 99.9059% ( 1) 00:20:13.035 7.324 - 7.377: 99.9109% ( 1) 00:20:13.035 8.072 - 8.126: 99.9158% ( 1) 00:20:13.035 
10.371 - 10.424: 99.9208% ( 1) 00:20:13.035 3996.098 - 4023.468: 100.0000% ( 16) 00:20:13.035 00:20:13.035 [2024-10-13 14:16:16.680754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:13.035 Complete histogram 00:20:13.035 ================== 00:20:13.035 Range in us Cumulative Count 00:20:13.035 1.624 - 1.630: 0.0050% ( 1) 00:20:13.035 1.637 - 1.644: 0.0248% ( 4) 00:20:13.035 1.644 - 1.651: 0.6584% ( 128) 00:20:13.035 1.651 - 1.657: 0.8415% ( 37) 00:20:13.035 1.657 - 1.664: 0.8910% ( 10) 00:20:13.035 1.664 - 1.671: 1.0000% ( 22) 00:20:13.035 1.671 - 1.677: 1.0445% ( 9) 00:20:13.035 1.677 - 1.684: 1.0544% ( 2) 00:20:13.035 1.697 - 1.704: 1.0742% ( 4) 00:20:13.035 1.704 - 1.711: 8.9005% ( 1581) 00:20:13.035 1.711 - 1.724: 53.1508% ( 8939) 00:20:13.035 1.724 - 1.737: 73.8973% ( 4191) 00:20:13.035 1.737 - 1.751: 81.5801% ( 1552) 00:20:13.035 1.751 - 1.764: 83.2781% ( 343) 00:20:13.035 1.764 - 1.777: 86.0799% ( 566) 00:20:13.035 1.777 - 1.791: 91.7281% ( 1141) 00:20:13.035 1.791 - 1.804: 96.1883% ( 901) 00:20:13.035 1.804 - 1.818: 98.4258% ( 452) 00:20:13.035 1.818 - 1.831: 99.2080% ( 158) 00:20:13.035 1.831 - 1.844: 99.3367% ( 26) 00:20:13.035 1.844 - 1.858: 99.3565% ( 4) 00:20:13.035 1.858 - 1.871: 99.3664% ( 2) 00:20:13.035 2.058 - 2.072: 99.3763% ( 2) 00:20:13.035 3.328 - 3.341: 99.3812% ( 1) 00:20:13.035 3.528 - 3.555: 99.3862% ( 1) 00:20:13.035 3.635 - 3.662: 99.3911% ( 1) 00:20:13.035 3.956 - 3.983: 99.3961% ( 1) 00:20:13.035 4.223 - 4.250: 99.4010% ( 1) 00:20:13.035 4.250 - 4.277: 99.4060% ( 1) 00:20:13.035 4.277 - 4.303: 99.4109% ( 1) 00:20:13.035 4.303 - 4.330: 99.4208% ( 2) 00:20:13.035 4.410 - 4.437: 99.4307% ( 2) 00:20:13.035 4.437 - 4.464: 99.4357% ( 1) 00:20:13.035 4.464 - 4.490: 99.4406% ( 1) 00:20:13.035 4.544 - 4.571: 99.4456% ( 1) 00:20:13.035 4.571 - 4.597: 99.4505% ( 1) 00:20:13.035 4.624 - 4.651: 99.4555% ( 1) 00:20:13.035 4.651 - 4.678: 99.4604% ( 1) 00:20:13.035 4.731 - 4.758: 99.4654% ( 1) 00:20:13.035 4.758 - 4.784: 99.4703% ( 1) 00:20:13.035 4.784 - 4.811: 99.4802% ( 2) 00:20:13.035 4.811 - 4.838: 99.4852% ( 1) 00:20:13.035 4.891 - 4.918: 99.4901% ( 1) 00:20:13.035 4.918 - 4.945: 99.4951% ( 1) 00:20:13.035 4.945 - 4.972: 99.5000% ( 1) 00:20:13.035 4.998 - 5.025: 99.5050% ( 1) 00:20:13.035 5.079 - 5.105: 99.5099% ( 1) 00:20:13.035 5.132 - 5.159: 99.5149% ( 1) 00:20:13.035 5.159 - 5.185: 99.5198% ( 1) 00:20:13.035 5.185 - 5.212: 99.5297% ( 2) 00:20:13.035 5.212 - 5.239: 99.5396% ( 2) 00:20:13.035 5.239 - 5.266: 99.5495% ( 2) 00:20:13.035 5.319 - 5.346: 99.5545% ( 1) 00:20:13.035 5.426 - 5.453: 99.5644% ( 2) 00:20:13.035 5.453 - 5.479: 99.5693% ( 1) 00:20:13.035 5.560 - 5.586: 99.5792% ( 2) 00:20:13.035 5.613 - 5.640: 99.5842% ( 1) 00:20:13.035 5.800 - 5.827: 99.5891% ( 1) 00:20:13.035 10.799 - 10.852: 99.5941% ( 1) 00:20:13.035 10.959 - 11.012: 99.5990% ( 1) 00:20:13.035 12.135 - 12.188: 99.6040% ( 1) 00:20:13.035 33.572 - 33.785: 99.6089% ( 1) 00:20:13.035 118.891 - 119.746: 99.6139% ( 1) 00:20:13.035 3996.098 - 4023.468: 100.0000% ( 78) 00:20:13.035 00:20:13.035 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:20:13.035 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:13.035 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:20:13.035 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:20:13.035 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:13.296 [ 00:20:13.296 { 00:20:13.296 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:13.296 "subtype": "Discovery", 00:20:13.296 "listen_addresses": [], 00:20:13.296 "allow_any_host": true, 00:20:13.296 "hosts": [] 00:20:13.296 }, 00:20:13.296 { 00:20:13.296 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:13.296 "subtype": "NVMe", 00:20:13.296 "listen_addresses": [ 00:20:13.296 { 00:20:13.296 "trtype": "VFIOUSER", 00:20:13.296 "adrfam": "IPv4", 00:20:13.296 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:13.296 "trsvcid": "0" 00:20:13.296 } 00:20:13.296 ], 00:20:13.296 "allow_any_host": true, 00:20:13.296 "hosts": [], 00:20:13.296 "serial_number": "SPDK1", 00:20:13.296 "model_number": "SPDK bdev Controller", 00:20:13.296 "max_namespaces": 32, 00:20:13.296 "min_cntlid": 1, 00:20:13.296 "max_cntlid": 65519, 00:20:13.296 "namespaces": [ 00:20:13.296 { 00:20:13.296 "nsid": 1, 00:20:13.296 "bdev_name": "Malloc1", 00:20:13.296 "name": "Malloc1", 00:20:13.296 "nguid": "9EC33F69686E4DA49B26219222D8A45F", 00:20:13.296 "uuid": "9ec33f69-686e-4da4-9b26-219222d8a45f" 00:20:13.296 } 00:20:13.296 ] 00:20:13.296 }, 00:20:13.296 { 00:20:13.296 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:13.296 "subtype": "NVMe", 00:20:13.296 "listen_addresses": [ 00:20:13.296 { 00:20:13.296 "trtype": "VFIOUSER", 00:20:13.296 "adrfam": "IPv4", 00:20:13.296 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:13.296 "trsvcid": "0" 00:20:13.296 } 00:20:13.296 ], 00:20:13.296 "allow_any_host": true, 00:20:13.296 "hosts": [], 00:20:13.296 "serial_number": "SPDK2", 00:20:13.296 "model_number": "SPDK bdev Controller", 00:20:13.296 "max_namespaces": 32, 00:20:13.296 "min_cntlid": 1, 00:20:13.296 "max_cntlid": 65519, 00:20:13.296 "namespaces": [ 00:20:13.296 { 00:20:13.296 "nsid": 1, 00:20:13.296 "bdev_name": "Malloc2", 00:20:13.296 "name": "Malloc2", 00:20:13.296 "nguid": "73D05F29DADA45B68D373431E33B0FFE", 00:20:13.296 "uuid": "73d05f29-dada-45b6-8d37-3431e33b0ffe" 00:20:13.296 } 00:20:13.296 ] 00:20:13.296 } 00:20:13.296 ] 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1697382 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:13.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:20:13.557 Malloc3 00:20:13.557 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:20:13.557 [2024-10-13 14:16:17.156358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:13.557 [2024-10-13 14:16:17.246781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:13.818 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:13.818 Asynchronous Event Request test 00:20:13.818 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:13.818 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:13.818 Registering asynchronous event callbacks... 00:20:13.818 Starting namespace attribute notice tests for all controllers... 00:20:13.818 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:13.818 aer_cb - Changed Namespace 00:20:13.818 Cleaning up... 00:20:13.818 [ 00:20:13.818 { 00:20:13.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:13.818 "subtype": "Discovery", 00:20:13.818 "listen_addresses": [], 00:20:13.818 "allow_any_host": true, 00:20:13.818 "hosts": [] 00:20:13.818 }, 00:20:13.818 { 00:20:13.818 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:13.818 "subtype": "NVMe", 00:20:13.818 "listen_addresses": [ 00:20:13.818 { 00:20:13.818 "trtype": "VFIOUSER", 00:20:13.818 "adrfam": "IPv4", 00:20:13.818 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:13.818 "trsvcid": "0" 00:20:13.818 } 00:20:13.818 ], 00:20:13.818 "allow_any_host": true, 00:20:13.818 "hosts": [], 00:20:13.818 "serial_number": "SPDK1", 00:20:13.818 "model_number": "SPDK bdev Controller", 00:20:13.818 "max_namespaces": 32, 00:20:13.818 "min_cntlid": 1, 00:20:13.818 "max_cntlid": 65519, 00:20:13.818 "namespaces": [ 00:20:13.818 { 00:20:13.818 "nsid": 1, 00:20:13.818 "bdev_name": "Malloc1", 00:20:13.818 "name": "Malloc1", 00:20:13.818 "nguid": "9EC33F69686E4DA49B26219222D8A45F", 00:20:13.818 "uuid": "9ec33f69-686e-4da4-9b26-219222d8a45f" 00:20:13.818 }, 00:20:13.818 { 00:20:13.818 "nsid": 2, 00:20:13.818 "bdev_name": "Malloc3", 00:20:13.818 "name": "Malloc3", 00:20:13.818 "nguid": "68D6D566E08C41D6A6C7AF8176697355", 00:20:13.818 "uuid": "68d6d566-e08c-41d6-a6c7-af8176697355" 00:20:13.818 } 00:20:13.818 ] 00:20:13.818 }, 00:20:13.818 { 00:20:13.818 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:13.818 "subtype": "NVMe", 00:20:13.818 "listen_addresses": [ 00:20:13.818 { 00:20:13.818 "trtype": "VFIOUSER", 00:20:13.818 "adrfam": "IPv4", 00:20:13.818 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:13.818 "trsvcid": "0" 00:20:13.818 } 00:20:13.818 ], 00:20:13.818 "allow_any_host": true, 00:20:13.818 "hosts": [], 00:20:13.818 "serial_number": "SPDK2", 00:20:13.818 "model_number": "SPDK bdev 
Controller", 00:20:13.818 "max_namespaces": 32, 00:20:13.818 "min_cntlid": 1, 00:20:13.818 "max_cntlid": 65519, 00:20:13.818 "namespaces": [ 00:20:13.818 { 00:20:13.818 "nsid": 1, 00:20:13.818 "bdev_name": "Malloc2", 00:20:13.818 "name": "Malloc2", 00:20:13.818 "nguid": "73D05F29DADA45B68D373431E33B0FFE", 00:20:13.818 "uuid": "73d05f29-dada-45b6-8d37-3431e33b0ffe" 00:20:13.818 } 00:20:13.818 ] 00:20:13.818 } 00:20:13.818 ] 00:20:13.818 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1697382 00:20:13.818 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:13.818 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:13.818 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:20:13.818 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:13.818 [2024-10-13 14:16:17.479808] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:20:13.818 [2024-10-13 14:16:17.479854] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697479 ] 00:20:14.082 [2024-10-13 14:16:17.590241] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:14.082 [2024-10-13 14:16:17.605959] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:20:14.082 [2024-10-13 14:16:17.614210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:14.082 [2024-10-13 14:16:17.614225] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1ff5e69000 00:20:14.082 [2024-10-13 14:16:17.615208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.616207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.617215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.618217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.619224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.620233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.621240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.622248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:14.082 [2024-10-13 14:16:17.623257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:14.082 [2024-10-13 14:16:17.623264] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1ff4b6c000 00:20:14.082 [2024-10-13 14:16:17.624176] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:14.082 [2024-10-13 14:16:17.633548] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:20:14.082 [2024-10-13 14:16:17.633566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:20:14.082 [2024-10-13 14:16:17.638619] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:14.082 [2024-10-13 14:16:17.638652] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:14.082 [2024-10-13 14:16:17.638709] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:20:14.082 [2024-10-13 14:16:17.638721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:20:14.082 [2024-10-13 14:16:17.638725] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
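The *DEBUG* state-machine lines here (setting state to read vs, read cap, check en, and so on) are printed because the identify command above was run with -L nvme -L nvme_vfio -L vfio_pci, which enables per-component debug logging; the same -L flags can be passed to most SPDK tools, though debug-level output typically requires a debug build. A minimal sketch:

  IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # -L <component> may be repeated once per log component to trace
  "$IDENTIFY" -g -L nvme -L nvme_vfio -L vfio_pci \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'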
00:20:14.082 [2024-10-13 14:16:17.639625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:20:14.082 [2024-10-13 14:16:17.639631] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:20:14.082 [2024-10-13 14:16:17.639636] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:20:14.082 [2024-10-13 14:16:17.640632] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:14.082 [2024-10-13 14:16:17.640638] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:20:14.082 [2024-10-13 14:16:17.640643] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:20:14.082 [2024-10-13 14:16:17.641638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:20:14.082 [2024-10-13 14:16:17.641644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:14.082 [2024-10-13 14:16:17.642644] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:20:14.082 [2024-10-13 14:16:17.642650] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:20:14.082 [2024-10-13 14:16:17.642654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:20:14.082 [2024-10-13 14:16:17.642658] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:14.083 [2024-10-13 14:16:17.642762] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:20:14.083 [2024-10-13 14:16:17.642765] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:14.083 [2024-10-13 14:16:17.642769] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003a0000 00:20:14.083 [2024-10-13 14:16:17.643647] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x20000039e000 00:20:14.083 [2024-10-13 14:16:17.644651] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:20:14.083 [2024-10-13 14:16:17.645653] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:14.083 [2024-10-13 14:16:17.646654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:14.083 [2024-10-13 14:16:17.646684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:20:14.083 [2024-10-13 14:16:17.647656] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:20:14.083 [2024-10-13 14:16:17.647662] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:14.083 [2024-10-13 14:16:17.647666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.647680] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:20:14.083 [2024-10-13 14:16:17.647686] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.647696] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:20:14.083 [2024-10-13 14:16:17.647699] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:20:14.083 [2024-10-13 14:16:17.647702] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.083 [2024-10-13 14:16:17.647710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.655069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.655078] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:20:14.083 [2024-10-13 14:16:17.655081] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:20:14.083 [2024-10-13 14:16:17.655084] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:20:14.083 [2024-10-13 14:16:17.655087] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:14.083 [2024-10-13 14:16:17.655091] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:20:14.083 [2024-10-13 14:16:17.655094] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:20:14.083 [2024-10-13 14:16:17.655097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.655103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.655111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.663069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.663078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.083 [2024-10-13 14:16:17.663087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.083 [2024-10-13 14:16:17.663093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.083 [2024-10-13 14:16:17.663099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.083 [2024-10-13 14:16:17.663102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.663108] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.663115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.671067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.671073] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:20:14.083 [2024-10-13 14:16:17.671076] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.671081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.671086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.671093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.679067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.679115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.679120] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.679126] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d9000 len:4096 00:20:14.083 [2024-10-13 14:16:17.679129] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d9000 00:20:14.083 [2024-10-13 14:16:17.679131] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.083 [2024-10-13 14:16:17.679136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002d9000 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.687067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.687075] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:20:14.083 [2024-10-13 14:16:17.687084] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.687089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.687094] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:20:14.083 [2024-10-13 14:16:17.687097] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:20:14.083 [2024-10-13 14:16:17.687101] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.083 [2024-10-13 14:16:17.687106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.695068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.695079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.695085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.695090] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:20:14.083 [2024-10-13 14:16:17.695093] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:20:14.083 [2024-10-13 14:16:17.695095] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.083 [2024-10-13 14:16:17.695100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.703094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.703102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703114] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703122] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703125] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703129] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:20:14.083 [2024-10-13 14:16:17.703132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:20:14.083 [2024-10-13 14:16:17.703136] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:20:14.083 [2024-10-13 14:16:17.703149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.711068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.711078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.719066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.719076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.727068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.727078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:14.083 [2024-10-13 14:16:17.735067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:14.083 [2024-10-13 14:16:17.735078] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d6000 len:8192 00:20:14.083 [2024-10-13 14:16:17.735082] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d6000 00:20:14.084 [2024-10-13 14:16:17.735084] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002d7000 00:20:14.084 [2024-10-13 14:16:17.735087] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002d7000 00:20:14.084 [2024-10-13 14:16:17.735089] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:14.084 [2024-10-13 14:16:17.735094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002d6000 PRP2 0x2000002d7000 00:20:14.084 [2024-10-13 14:16:17.735099] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dc000 len:512 00:20:14.084 [2024-10-13 14:16:17.735102] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dc000 00:20:14.084 [2024-10-13 14:16:17.735104] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.084 [2024-10-13 14:16:17.735109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002dc000 PRP2 0x0 00:20:14.084 [2024-10-13 14:16:17.735113] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:512 00:20:14.084 [2024-10-13 14:16:17.735117] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:20:14.084 [2024-10-13 14:16:17.735119] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.084 [2024-10-13 14:16:17.735123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:20:14.084 [2024-10-13 14:16:17.735129] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d4000 len:4096 00:20:14.084 [2024-10-13 14:16:17.735132] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d4000 00:20:14.084 [2024-10-13 14:16:17.735134] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:14.084 [2024-10-13 14:16:17.735138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002d4000 PRP2 0x0 00:20:14.084 [2024-10-13 14:16:17.743067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:14.084 [2024-10-13 14:16:17.743078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:14.084 [2024-10-13 14:16:17.743085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:14.084 [2024-10-13 14:16:17.743090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:14.084 ===================================================== 00:20:14.084 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:14.084 ===================================================== 00:20:14.084 Controller Capabilities/Features 00:20:14.084 ================================ 00:20:14.084 Vendor ID: 4e58 00:20:14.084 Subsystem Vendor ID: 4e58 00:20:14.084 Serial Number: SPDK2 00:20:14.084 Model Number: SPDK bdev Controller 00:20:14.084 Firmware Version: 25.01 00:20:14.084 Recommended Arb Burst: 6 00:20:14.084 IEEE OUI Identifier: 8d 6b 50 00:20:14.084 Multi-path I/O 00:20:14.084 May have multiple subsystem ports: Yes 00:20:14.084 May have multiple controllers: Yes 00:20:14.084 Associated with SR-IOV VF: No 00:20:14.084 Max Data Transfer Size: 131072 00:20:14.084 Max Number of Namespaces: 32 00:20:14.084 Max Number of I/O Queues: 127 00:20:14.084 NVMe Specification Version (VS): 1.3 00:20:14.084 NVMe Specification Version (Identify): 1.3 00:20:14.084 Maximum Queue Entries: 256 00:20:14.084 Contiguous Queues Required: Yes 00:20:14.084 Arbitration Mechanisms Supported 00:20:14.084 Weighted Round Robin: Not Supported 00:20:14.084 Vendor Specific: Not Supported 00:20:14.084 Reset Timeout: 15000 ms 00:20:14.084 Doorbell Stride: 4 bytes 00:20:14.084 NVM Subsystem Reset: Not Supported 00:20:14.084 Command Sets Supported 00:20:14.084 NVM Command Set: Supported 00:20:14.084 Boot Partition: Not Supported 00:20:14.084 Memory Page Size Minimum: 4096 bytes 00:20:14.084 Memory Page Size Maximum: 4096 bytes 00:20:14.084 Persistent Memory Region: Not Supported 00:20:14.084 Optional Asynchronous Events Supported 00:20:14.084 Namespace Attribute Notices: 
Supported 00:20:14.084 Firmware Activation Notices: Not Supported 00:20:14.084 ANA Change Notices: Not Supported 00:20:14.084 PLE Aggregate Log Change Notices: Not Supported 00:20:14.084 LBA Status Info Alert Notices: Not Supported 00:20:14.084 EGE Aggregate Log Change Notices: Not Supported 00:20:14.084 Normal NVM Subsystem Shutdown event: Not Supported 00:20:14.084 Zone Descriptor Change Notices: Not Supported 00:20:14.084 Discovery Log Change Notices: Not Supported 00:20:14.084 Controller Attributes 00:20:14.084 128-bit Host Identifier: Supported 00:20:14.084 Non-Operational Permissive Mode: Not Supported 00:20:14.084 NVM Sets: Not Supported 00:20:14.084 Read Recovery Levels: Not Supported 00:20:14.084 Endurance Groups: Not Supported 00:20:14.084 Predictable Latency Mode: Not Supported 00:20:14.084 Traffic Based Keep ALive: Not Supported 00:20:14.084 Namespace Granularity: Not Supported 00:20:14.084 SQ Associations: Not Supported 00:20:14.084 UUID List: Not Supported 00:20:14.084 Multi-Domain Subsystem: Not Supported 00:20:14.084 Fixed Capacity Management: Not Supported 00:20:14.084 Variable Capacity Management: Not Supported 00:20:14.084 Delete Endurance Group: Not Supported 00:20:14.084 Delete NVM Set: Not Supported 00:20:14.084 Extended LBA Formats Supported: Not Supported 00:20:14.084 Flexible Data Placement Supported: Not Supported 00:20:14.084 00:20:14.084 Controller Memory Buffer Support 00:20:14.084 ================================ 00:20:14.084 Supported: No 00:20:14.084 00:20:14.084 Persistent Memory Region Support 00:20:14.084 ================================ 00:20:14.084 Supported: No 00:20:14.084 00:20:14.084 Admin Command Set Attributes 00:20:14.084 ============================ 00:20:14.084 Security Send/Receive: Not Supported 00:20:14.084 Format NVM: Not Supported 00:20:14.084 Firmware Activate/Download: Not Supported 00:20:14.084 Namespace Management: Not Supported 00:20:14.084 Device Self-Test: Not Supported 00:20:14.084 Directives: Not Supported 00:20:14.084 NVMe-MI: Not Supported 00:20:14.084 Virtualization Management: Not Supported 00:20:14.084 Doorbell Buffer Config: Not Supported 00:20:14.084 Get LBA Status Capability: Not Supported 00:20:14.084 Command & Feature Lockdown Capability: Not Supported 00:20:14.084 Abort Command Limit: 4 00:20:14.084 Async Event Request Limit: 4 00:20:14.084 Number of Firmware Slots: N/A 00:20:14.084 Firmware Slot 1 Read-Only: N/A 00:20:14.084 Firmware Activation Without Reset: N/A 00:20:14.084 Multiple Update Detection Support: N/A 00:20:14.084 Firmware Update Granularity: No Information Provided 00:20:14.084 Per-Namespace SMART Log: No 00:20:14.084 Asymmetric Namespace Access Log Page: Not Supported 00:20:14.084 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:20:14.084 Command Effects Log Page: Supported 00:20:14.084 Get Log Page Extended Data: Supported 00:20:14.084 Telemetry Log Pages: Not Supported 00:20:14.084 Persistent Event Log Pages: Not Supported 00:20:14.084 Supported Log Pages Log Page: May Support 00:20:14.084 Commands Supported & Effects Log Page: Not Supported 00:20:14.084 Feature Identifiers & Effects Log Page:May Support 00:20:14.084 NVMe-MI Commands & Effects Log Page: May Support 00:20:14.084 Data Area 4 for Telemetry Log: Not Supported 00:20:14.084 Error Log Page Entries Supported: 128 00:20:14.084 Keep Alive: Supported 00:20:14.084 Keep Alive Granularity: 10000 ms 00:20:14.084 00:20:14.084 NVM Command Set Attributes 00:20:14.084 ========================== 00:20:14.084 Submission Queue Entry Size 00:20:14.084 Max: 64 
00:20:14.084 Min: 64 00:20:14.084 Completion Queue Entry Size 00:20:14.084 Max: 16 00:20:14.084 Min: 16 00:20:14.084 Number of Namespaces: 32 00:20:14.084 Compare Command: Supported 00:20:14.084 Write Uncorrectable Command: Not Supported 00:20:14.084 Dataset Management Command: Supported 00:20:14.084 Write Zeroes Command: Supported 00:20:14.084 Set Features Save Field: Not Supported 00:20:14.084 Reservations: Not Supported 00:20:14.084 Timestamp: Not Supported 00:20:14.084 Copy: Supported 00:20:14.084 Volatile Write Cache: Present 00:20:14.084 Atomic Write Unit (Normal): 1 00:20:14.084 Atomic Write Unit (PFail): 1 00:20:14.084 Atomic Compare & Write Unit: 1 00:20:14.084 Fused Compare & Write: Supported 00:20:14.084 Scatter-Gather List 00:20:14.084 SGL Command Set: Supported (Dword aligned) 00:20:14.084 SGL Keyed: Not Supported 00:20:14.084 SGL Bit Bucket Descriptor: Not Supported 00:20:14.084 SGL Metadata Pointer: Not Supported 00:20:14.084 Oversized SGL: Not Supported 00:20:14.084 SGL Metadata Address: Not Supported 00:20:14.084 SGL Offset: Not Supported 00:20:14.084 Transport SGL Data Block: Not Supported 00:20:14.084 Replay Protected Memory Block: Not Supported 00:20:14.084 00:20:14.084 Firmware Slot Information 00:20:14.084 ========================= 00:20:14.084 Active slot: 1 00:20:14.084 Slot 1 Firmware Revision: 25.01 00:20:14.084 00:20:14.084 00:20:14.084 Commands Supported and Effects 00:20:14.084 ============================== 00:20:14.084 Admin Commands 00:20:14.084 -------------- 00:20:14.084 Get Log Page (02h): Supported 00:20:14.084 Identify (06h): Supported 00:20:14.084 Abort (08h): Supported 00:20:14.084 Set Features (09h): Supported 00:20:14.084 Get Features (0Ah): Supported 00:20:14.084 Asynchronous Event Request (0Ch): Supported 00:20:14.084 Keep Alive (18h): Supported 00:20:14.084 I/O Commands 00:20:14.084 ------------ 00:20:14.084 Flush (00h): Supported LBA-Change 00:20:14.084 Write (01h): Supported LBA-Change 00:20:14.084 Read (02h): Supported 00:20:14.085 Compare (05h): Supported 00:20:14.085 Write Zeroes (08h): Supported LBA-Change 00:20:14.085 Dataset Management (09h): Supported LBA-Change 00:20:14.085 Copy (19h): Supported LBA-Change 00:20:14.085 00:20:14.085 Error Log 00:20:14.085 ========= 00:20:14.085 00:20:14.085 Arbitration 00:20:14.085 =========== 00:20:14.085 Arbitration Burst: 1 00:20:14.085 00:20:14.085 Power Management 00:20:14.085 ================ 00:20:14.085 Number of Power States: 1 00:20:14.085 Current Power State: Power State #0 00:20:14.085 Power State #0: 00:20:14.085 Max Power: 0.00 W 00:20:14.085 Non-Operational State: Operational 00:20:14.085 Entry Latency: Not Reported 00:20:14.085 Exit Latency: Not Reported 00:20:14.085 Relative Read Throughput: 0 00:20:14.085 Relative Read Latency: 0 00:20:14.085 Relative Write Throughput: 0 00:20:14.085 Relative Write Latency: 0 00:20:14.085 Idle Power: Not Reported 00:20:14.085 Active Power: Not Reported 00:20:14.085 Non-Operational Permissive Mode: Not Supported 00:20:14.085 00:20:14.085 Health Information 00:20:14.085 ================== 00:20:14.085 Critical Warnings: 00:20:14.085 Available Spare Space: OK 00:20:14.085 Temperature: OK 00:20:14.085 Device Reliability: OK 00:20:14.085 Read Only: No 00:20:14.085 Volatile Memory Backup: OK 00:20:14.085 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:14.085 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:14.085 Available Spare: 0% 00:20:14.085 Available Spare Threshold: 0% 00:20:14.085 Life Percentage Used: 0% 00:20:14.085 Data Units Read: 0 00:20:14.085 Data Units Written: 0 00:20:14.085 Host Read Commands: 0 00:20:14.085 Host Write Commands: 0 00:20:14.085 Controller Busy Time: 0 minutes 00:20:14.085 Power Cycles: 0 00:20:14.085 Power On Hours: 0 hours 00:20:14.085 Unsafe Shutdowns: 0 00:20:14.085 Unrecoverable Media Errors: 0 00:20:14.085 Lifetime Error Log Entries: 0 00:20:14.085 Warning Temperature Time: 0 minutes 00:20:14.085 Critical Temperature Time: 0 minutes 00:20:14.085
[2024-10-13 14:16:17.743161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:14.085
[2024-10-13 14:16:17.751066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:14.085
[2024-10-13 14:16:17.751091] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:20:14.085
[2024-10-13 14:16:17.751098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.085
[2024-10-13 14:16:17.751102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.085
[2024-10-13 14:16:17.751108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.085
[2024-10-13 14:16:17.751113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.085
[2024-10-13 14:16:17.751152] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:14.085
[2024-10-13 14:16:17.751160] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:20:14.085
[2024-10-13 14:16:17.752157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:14.085
[2024-10-13 14:16:17.752196] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:20:14.085
[2024-10-13 14:16:17.752201] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:20:14.085
[2024-10-13 14:16:17.753160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:20:14.085
[2024-10-13 14:16:17.753168] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:20:14.085
[2024-10-13 14:16:17.753209] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:20:14.085
[2024-10-13 14:16:17.754179] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:14.085
00:20:14.085 Number of Queues 00:20:14.085 ================ 00:20:14.085 Number of I/O Submission Queues: 127 00:20:14.085 Number of I/O Completion Queues: 127 00:20:14.085 00:20:14.085 Active Namespaces 00:20:14.085 ================= 00:20:14.085 Namespace ID:1 00:20:14.085 Error Recovery Timeout: Unlimited 00:20:14.085 Command Set Identifier: NVM (00h) 00:20:14.085 Deallocate: Supported 00:20:14.085 Deallocated/Unwritten Error: Not
Supported 00:20:14.085 Deallocated Read Value: Unknown 00:20:14.085 Deallocate in Write Zeroes: Not Supported 00:20:14.085 Deallocated Guard Field: 0xFFFF 00:20:14.085 Flush: Supported 00:20:14.085 Reservation: Supported 00:20:14.085 Namespace Sharing Capabilities: Multiple Controllers 00:20:14.085 Size (in LBAs): 131072 (0GiB) 00:20:14.085 Capacity (in LBAs): 131072 (0GiB) 00:20:14.085 Utilization (in LBAs): 131072 (0GiB) 00:20:14.085 NGUID: 73D05F29DADA45B68D373431E33B0FFE 00:20:14.085 UUID: 73d05f29-dada-45b6-8d37-3431e33b0ffe 00:20:14.085 Thin Provisioning: Not Supported 00:20:14.085 Per-NS Atomic Units: Yes 00:20:14.085 Atomic Boundary Size (Normal): 0 00:20:14.085 Atomic Boundary Size (PFail): 0 00:20:14.085 Atomic Boundary Offset: 0 00:20:14.085 Maximum Single Source Range Length: 65535 00:20:14.085 Maximum Copy Length: 65535 00:20:14.085 Maximum Source Range Count: 1 00:20:14.085 NGUID/EUI64 Never Reused: No 00:20:14.085 Namespace Write Protected: No 00:20:14.085 Number of LBA Formats: 1 00:20:14.085 Current LBA Format: LBA Format #00 00:20:14.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:14.085 00:20:14.346 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:14.346 [2024-10-13 14:16:18.021807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:19.636 Initializing NVMe Controllers 00:20:19.636 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:19.636 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:19.636 Initialization complete. Launching workers. 00:20:19.636 ======================================================== 00:20:19.636 Latency(us) 00:20:19.637 Device Information : IOPS MiB/s Average min max 00:20:19.637 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39931.85 155.98 3205.33 839.72 7794.15 00:20:19.637 ======================================================== 00:20:19.637 Total : 39931.85 155.98 3205.33 839.72 7794.15 00:20:19.637 00:20:19.637 [2024-10-13 14:16:23.115256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:19.637 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:19.898 [2024-10-13 14:16:23.393492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:25.188 Initializing NVMe Controllers 00:20:25.188 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:25.188 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:25.188 Initialization complete. Launching workers. 
00:20:25.188 ======================================================== 00:20:25.188 Latency(us) 00:20:25.188 Device Information : IOPS MiB/s Average min max 00:20:25.188 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39966.60 156.12 3206.71 850.77 9779.38 00:20:25.188 ======================================================== 00:20:25.188 Total : 39966.60 156.12 3206.71 850.77 9779.38 00:20:25.188 00:20:25.188 [2024-10-13 14:16:28.404495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:25.188 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:25.188 [2024-10-13 14:16:28.695441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:30.478 [2024-10-13 14:16:33.835153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:30.478 Initializing NVMe Controllers 00:20:30.478 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:30.478 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:30.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:30.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:30.478 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:30.478 Initialization complete. Launching workers. 00:20:30.478 Starting thread on core 2 00:20:30.478 Starting thread on core 3 00:20:30.478 Starting thread on core 1 00:20:30.478 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:30.478 [2024-10-13 14:16:34.171378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:33.779 [2024-10-13 14:16:37.223459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:33.779 Initializing NVMe Controllers 00:20:33.779 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:33.779 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:33.779 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:33.779 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:33.779 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:33.779 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:33.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:33.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:33.779 Initialization complete. Launching workers. 
00:20:33.779 Starting thread on core 1 with urgent priority queue 00:20:33.779 Starting thread on core 2 with urgent priority queue 00:20:33.779 Starting thread on core 3 with urgent priority queue 00:20:33.779 Starting thread on core 0 with urgent priority queue 00:20:33.779 SPDK bdev Controller (SPDK2 ) core 0: 7783.67 IO/s 12.85 secs/100000 ios 00:20:33.779 SPDK bdev Controller (SPDK2 ) core 1: 9658.33 IO/s 10.35 secs/100000 ios 00:20:33.779 SPDK bdev Controller (SPDK2 ) core 2: 17838.00 IO/s 5.61 secs/100000 ios 00:20:33.779 SPDK bdev Controller (SPDK2 ) core 3: 15250.67 IO/s 6.56 secs/100000 ios 00:20:33.779 ======================================================== 00:20:33.779 00:20:33.779 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:34.041 [2024-10-13 14:16:37.552330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:34.041 Initializing NVMe Controllers 00:20:34.041 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:34.041 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:34.041 Namespace ID: 1 size: 0GB 00:20:34.041 Initialization complete. 00:20:34.041 INFO: using host memory buffer for IO 00:20:34.041 Hello world! 00:20:34.041 [2024-10-13 14:16:37.564392] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:34.041 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:34.302 [2024-10-13 14:16:37.888682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:35.684 Initializing NVMe Controllers 00:20:35.684 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:35.684 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:35.684 Initialization complete. Launching workers. 
00:20:35.684 submit (in ns) avg, min, max = 5768.0, 2869.2, 4007973.6 00:20:35.684 complete (in ns) avg, min, max = 16522.7, 1638.8, 4007776.5 00:20:35.684 00:20:35.684 Submit histogram 00:20:35.684 ================ 00:20:35.684 Range in us Cumulative Count 00:20:35.684 2.860 - 2.873: 0.0098% ( 2) 00:20:35.684 2.873 - 2.887: 0.5027% ( 101) 00:20:35.684 2.887 - 2.900: 2.0500% ( 317) 00:20:35.684 2.900 - 2.913: 4.0707% ( 414) 00:20:35.684 2.913 - 2.927: 7.2042% ( 642) 00:20:35.684 2.927 - 2.940: 11.0553% ( 789) 00:20:35.684 2.940 - 2.954: 16.4145% ( 1098) 00:20:35.685 2.954 - 2.967: 22.6962% ( 1287) 00:20:35.685 2.967 - 2.980: 29.4367% ( 1381) 00:20:35.685 2.980 - 2.994: 35.1767% ( 1176) 00:20:35.685 2.994 - 3.007: 41.0338% ( 1200) 00:20:35.685 3.007 - 3.020: 46.8860% ( 1199) 00:20:35.685 3.020 - 3.034: 53.9535% ( 1448) 00:20:35.685 3.034 - 3.047: 62.6708% ( 1786) 00:20:35.685 3.047 - 3.060: 71.9738% ( 1906) 00:20:35.685 3.060 - 3.074: 79.9297% ( 1630) 00:20:35.685 3.074 - 3.087: 86.9631% ( 1441) 00:20:35.685 3.087 - 3.101: 92.2003% ( 1073) 00:20:35.685 3.101 - 3.114: 95.0312% ( 580) 00:20:35.685 3.114 - 3.127: 96.6859% ( 339) 00:20:35.685 3.127 - 3.141: 97.8866% ( 246) 00:20:35.685 3.141 - 3.154: 98.7261% ( 172) 00:20:35.685 3.154 - 3.167: 99.1751% ( 92) 00:20:35.685 3.167 - 3.181: 99.3362% ( 33) 00:20:35.685 3.181 - 3.194: 99.3948% ( 12) 00:20:35.685 3.194 - 3.207: 99.4241% ( 6) 00:20:35.685 3.207 - 3.221: 99.4289% ( 1) 00:20:35.685 3.221 - 3.234: 99.4338% ( 1) 00:20:35.685 3.234 - 3.248: 99.4387% ( 1) 00:20:35.685 3.301 - 3.314: 99.4436% ( 1) 00:20:35.685 3.381 - 3.395: 99.4485% ( 1) 00:20:35.685 3.448 - 3.475: 99.4533% ( 1) 00:20:35.685 3.475 - 3.502: 99.4582% ( 1) 00:20:35.685 3.555 - 3.582: 99.4631% ( 1) 00:20:35.685 3.582 - 3.608: 99.4729% ( 2) 00:20:35.685 3.608 - 3.635: 99.4777% ( 1) 00:20:35.685 3.635 - 3.662: 99.4826% ( 1) 00:20:35.685 3.662 - 3.689: 99.4875% ( 1) 00:20:35.685 3.689 - 3.715: 99.4973% ( 2) 00:20:35.685 3.769 - 3.796: 99.5070% ( 2) 00:20:35.685 3.929 - 3.956: 99.5168% ( 2) 00:20:35.685 4.116 - 4.143: 99.5266% ( 2) 00:20:35.685 4.170 - 4.196: 99.5314% ( 1) 00:20:35.685 4.250 - 4.277: 99.5363% ( 1) 00:20:35.685 4.437 - 4.464: 99.5412% ( 1) 00:20:35.685 4.464 - 4.490: 99.5558% ( 3) 00:20:35.685 4.490 - 4.517: 99.5607% ( 1) 00:20:35.685 4.678 - 4.704: 99.5656% ( 1) 00:20:35.685 4.758 - 4.784: 99.5705% ( 1) 00:20:35.685 4.784 - 4.811: 99.5802% ( 2) 00:20:35.685 4.811 - 4.838: 99.5851% ( 1) 00:20:35.685 4.838 - 4.865: 99.5900% ( 1) 00:20:35.685 4.918 - 4.945: 99.5949% ( 1) 00:20:35.685 4.945 - 4.972: 99.5998% ( 1) 00:20:35.685 4.998 - 5.025: 99.6046% ( 1) 00:20:35.685 5.025 - 5.052: 99.6144% ( 2) 00:20:35.685 5.132 - 5.159: 99.6193% ( 1) 00:20:35.685 5.159 - 5.185: 99.6291% ( 2) 00:20:35.685 5.239 - 5.266: 99.6339% ( 1) 00:20:35.685 5.292 - 5.319: 99.6388% ( 1) 00:20:35.685 5.319 - 5.346: 99.6486% ( 2) 00:20:35.685 5.373 - 5.399: 99.6535% ( 1) 00:20:35.685 5.453 - 5.479: 99.6583% ( 1) 00:20:35.685 5.506 - 5.533: 99.6681% ( 2) 00:20:35.685 5.613 - 5.640: 99.6730% ( 1) 00:20:35.685 5.693 - 5.720: 99.6779% ( 1) 00:20:35.685 5.720 - 5.747: 99.6827% ( 1) 00:20:35.685 5.773 - 5.800: 99.6876% ( 1) 00:20:35.685 5.800 - 5.827: 99.6925% ( 1) 00:20:35.685 5.854 - 5.880: 99.7071% ( 3) 00:20:35.685 5.907 - 5.934: 99.7218% ( 3) 00:20:35.685 5.934 - 5.961: 99.7267% ( 1) 00:20:35.685 5.961 - 5.987: 99.7364% ( 2) 00:20:35.685 5.987 - 6.014: 99.7413% ( 1) 00:20:35.685 6.041 - 6.067: 99.7511% ( 2) 00:20:35.685 6.067 - 6.094: 99.7608% ( 2) 00:20:35.685 6.094 - 6.121: 99.7755% ( 3) 
00:20:35.685 6.121 - 6.148: 99.7852% ( 2) 00:20:35.685 6.148 - 6.174: 99.7950% ( 2) 00:20:35.685 [2024-10-13 14:16:38.985582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:35.685 6.201 - 6.228: 99.7999% ( 1) 00:20:35.685 6.228 - 6.255: 99.8048% ( 1) 00:20:35.685 6.335 - 6.362: 99.8145% ( 2) 00:20:35.685 6.388 - 6.415: 99.8194% ( 1) 00:20:35.685 6.442 - 6.468: 99.8292% ( 2) 00:20:35.685 6.549 - 6.575: 99.8389% ( 2) 00:20:35.685 6.575 - 6.602: 99.8487% ( 2) 00:20:35.685 6.602 - 6.629: 99.8536% ( 1) 00:20:35.685 6.656 - 6.682: 99.8585% ( 1) 00:20:35.685 6.682 - 6.709: 99.8633% ( 1) 00:20:35.685 6.709 - 6.736: 99.8682% ( 1) 00:20:35.685 6.736 - 6.762: 99.8780% ( 2) 00:20:35.685 6.896 - 6.950: 99.8829% ( 1) 00:20:35.685 6.950 - 7.003: 99.8877% ( 1) 00:20:35.685 7.003 - 7.056: 99.8926% ( 1) 00:20:35.685 7.217 - 7.270: 99.8975% ( 1) 00:20:35.685 7.751 - 7.805: 99.9073% ( 2) 00:20:35.685 8.553 - 8.607: 99.9121% ( 1) 00:20:35.685 10.478 - 10.531: 99.9170% ( 1) 00:20:35.685 10.585 - 10.638: 99.9219% ( 1) 00:20:35.685 11.226 - 11.280: 99.9268% ( 1) 00:20:35.685 14.434 - 14.541: 99.9317% ( 1) 00:20:35.685 3996.098 - 4023.468: 100.0000% ( 14) 00:20:35.685 00:20:35.685 Complete histogram 00:20:35.685 ================== 00:20:35.685 Range in us Cumulative Count 00:20:35.685 1.637 - 1.644: 0.0098% ( 2) 00:20:35.685 1.644 - 1.651: 0.0439% ( 7) 00:20:35.685 1.651 - 1.657: 0.7809% ( 151) 00:20:35.685 1.657 - 1.664: 0.9713% ( 39) 00:20:35.685 1.664 - 1.671: 1.0689% ( 20) 00:20:35.685 1.671 - 1.677: 1.1665% ( 20) 00:20:35.685 1.677 - 1.684: 1.2056% ( 8) 00:20:35.685 1.684 - 1.691: 4.0267% ( 578) 00:20:35.685 1.691 - 1.697: 47.0080% ( 8806) 00:20:35.685 1.697 - 1.704: 54.4611% ( 1527) 00:20:35.685 1.704 - 1.711: 65.5213% ( 2266) 00:20:35.685 1.711 - 1.724: 79.3635% ( 2836) 00:20:35.685 1.724 - 1.737: 83.4196% ( 831) 00:20:35.685 1.737 - 1.751: 84.5861% ( 239) 00:20:35.685 1.751 - 1.764: 88.1296% ( 726) 00:20:35.685 1.764 - 1.777: 93.7232% ( 1146) 00:20:35.685 1.777 - 1.791: 97.4278% ( 759) 00:20:35.685 1.791 - 1.804: 98.8481% ( 291) 00:20:35.685 1.804 - 1.818: 99.3216% ( 97) 00:20:35.685 1.818 - 1.831: 99.4289% ( 22) 00:20:35.685 1.831 - 1.844: 99.4338% ( 1) 00:20:35.685 1.844 - 1.858: 99.4387% ( 1) 00:20:35.685 1.965 - 1.978: 99.4485% ( 2) 00:20:35.685 2.018 - 2.031: 99.4533% ( 1) 00:20:35.685 2.058 - 2.072: 99.4582% ( 1) 00:20:35.685 2.072 - 2.085: 99.4631% ( 1) 00:20:35.685 2.085 - 2.098: 99.4680% ( 1) 00:20:35.685 2.098 - 2.112: 99.4777% ( 2) 00:20:35.685 2.112 - 2.125: 99.4826% ( 1) 00:20:35.685 2.138 - 2.152: 99.4875% ( 1) 00:20:35.685 3.274 - 3.288: 99.4924% ( 1) 00:20:35.685 3.608 - 3.635: 99.4973% ( 1) 00:20:35.685 3.769 - 3.796: 99.5021% ( 1) 00:20:35.685 3.796 - 3.822: 99.5070% ( 1) 00:20:35.685 3.902 - 3.929: 99.5119% ( 1) 00:20:35.685 3.929 - 3.956: 99.5168% ( 1) 00:20:35.685 4.063 - 4.090: 99.5217% ( 1) 00:20:35.685 4.384 - 4.410: 99.5266% ( 1) 00:20:35.685 4.410 - 4.437: 99.5314% ( 1) 00:20:35.685 4.437 - 4.464: 99.5363% ( 1) 00:20:35.685 4.464 - 4.490: 99.5412% ( 1) 00:20:35.685 4.490 - 4.517: 99.5461% ( 1) 00:20:35.685 4.544 - 4.571: 99.5510% ( 1) 00:20:35.685 4.597 - 4.624: 99.5558% ( 1) 00:20:35.685 4.651 - 4.678: 99.5607% ( 1) 00:20:35.685 4.758 - 4.784: 99.5656% ( 1) 00:20:35.685 4.918 - 4.945: 99.5754% ( 2) 00:20:35.685 4.998 - 5.025: 99.5851% ( 2) 00:20:35.685 5.132 - 5.159: 99.5900% ( 1) 00:20:35.685 5.239 - 5.266: 99.5949% ( 1) 00:20:35.685 5.292 - 5.319: 99.5998% ( 1) 00:20:35.685 5.319 - 5.346: 99.6046% ( 1) 00:20:35.685 
5.800 - 5.827: 99.6095% ( 1) 00:20:35.685 5.987 - 6.014: 99.6144% ( 1) 00:20:35.685 6.014 - 6.041: 99.6193% ( 1) 00:20:35.685 6.736 - 6.762: 99.6242% ( 1) 00:20:35.685 126.589 - 127.444: 99.6291% ( 1) 00:20:35.685 3065.499 - 3079.185: 99.6339% ( 1) 00:20:35.685 3886.615 - 3913.986: 99.6388% ( 1) 00:20:35.685 3996.098 - 4023.468: 100.0000% ( 74) 00:20:35.685 00:20:35.685 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:35.685 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:35.685 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:35.685 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:35.685 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:35.685 [ 00:20:35.685 { 00:20:35.685 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:35.685 "subtype": "Discovery", 00:20:35.685 "listen_addresses": [], 00:20:35.685 "allow_any_host": true, 00:20:35.685 "hosts": [] 00:20:35.685 }, 00:20:35.685 { 00:20:35.685 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:35.685 "subtype": "NVMe", 00:20:35.685 "listen_addresses": [ 00:20:35.685 { 00:20:35.685 "trtype": "VFIOUSER", 00:20:35.685 "adrfam": "IPv4", 00:20:35.685 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:35.685 "trsvcid": "0" 00:20:35.685 } 00:20:35.685 ], 00:20:35.685 "allow_any_host": true, 00:20:35.685 "hosts": [], 00:20:35.685 "serial_number": "SPDK1", 00:20:35.685 "model_number": "SPDK bdev Controller", 00:20:35.685 "max_namespaces": 32, 00:20:35.685 "min_cntlid": 1, 00:20:35.685 "max_cntlid": 65519, 00:20:35.685 "namespaces": [ 00:20:35.685 { 00:20:35.685 "nsid": 1, 00:20:35.685 "bdev_name": "Malloc1", 00:20:35.685 "name": "Malloc1", 00:20:35.685 "nguid": "9EC33F69686E4DA49B26219222D8A45F", 00:20:35.685 "uuid": "9ec33f69-686e-4da4-9b26-219222d8a45f" 00:20:35.685 }, 00:20:35.685 { 00:20:35.685 "nsid": 2, 00:20:35.685 "bdev_name": "Malloc3", 00:20:35.686 "name": "Malloc3", 00:20:35.686 "nguid": "68D6D566E08C41D6A6C7AF8176697355", 00:20:35.686 "uuid": "68d6d566-e08c-41d6-a6c7-af8176697355" 00:20:35.686 } 00:20:35.686 ] 00:20:35.686 }, 00:20:35.686 { 00:20:35.686 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:35.686 "subtype": "NVMe", 00:20:35.686 "listen_addresses": [ 00:20:35.686 { 00:20:35.686 "trtype": "VFIOUSER", 00:20:35.686 "adrfam": "IPv4", 00:20:35.686 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:35.686 "trsvcid": "0" 00:20:35.686 } 00:20:35.686 ], 00:20:35.686 "allow_any_host": true, 00:20:35.686 "hosts": [], 00:20:35.686 "serial_number": "SPDK2", 00:20:35.686 "model_number": "SPDK bdev Controller", 00:20:35.686 "max_namespaces": 32, 00:20:35.686 "min_cntlid": 1, 00:20:35.686 "max_cntlid": 65519, 00:20:35.686 "namespaces": [ 00:20:35.686 { 00:20:35.686 "nsid": 1, 00:20:35.686 "bdev_name": "Malloc2", 00:20:35.686 "name": "Malloc2", 00:20:35.686 "nguid": "73D05F29DADA45B68D373431E33B0FFE", 00:20:35.686 "uuid": "73d05f29-dada-45b6-8d37-3431e33b0ffe" 00:20:35.686 } 00:20:35.686 ] 00:20:35.686 } 00:20:35.686 ] 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1701574 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:35.686 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:35.946 Malloc4 00:20:35.947 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:35.947 [2024-10-13 14:16:39.453199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:35.947 [2024-10-13 14:16:39.569733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:35.947 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:35.947 Asynchronous Event Request test 00:20:35.947 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:35.947 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:35.947 Registering asynchronous event callbacks... 00:20:35.947 Starting namespace attribute notice tests for all controllers... 00:20:35.947 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:35.947 aer_cb - Changed Namespace 00:20:35.947 Cleaning up... 
00:20:36.207 [ 00:20:36.207 { 00:20:36.207 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:36.207 "subtype": "Discovery", 00:20:36.207 "listen_addresses": [], 00:20:36.207 "allow_any_host": true, 00:20:36.207 "hosts": [] 00:20:36.207 }, 00:20:36.207 { 00:20:36.207 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:36.207 "subtype": "NVMe", 00:20:36.207 "listen_addresses": [ 00:20:36.207 { 00:20:36.207 "trtype": "VFIOUSER", 00:20:36.207 "adrfam": "IPv4", 00:20:36.207 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:36.207 "trsvcid": "0" 00:20:36.207 } 00:20:36.207 ], 00:20:36.207 "allow_any_host": true, 00:20:36.207 "hosts": [], 00:20:36.207 "serial_number": "SPDK1", 00:20:36.207 "model_number": "SPDK bdev Controller", 00:20:36.207 "max_namespaces": 32, 00:20:36.207 "min_cntlid": 1, 00:20:36.207 "max_cntlid": 65519, 00:20:36.207 "namespaces": [ 00:20:36.207 { 00:20:36.207 "nsid": 1, 00:20:36.207 "bdev_name": "Malloc1", 00:20:36.207 "name": "Malloc1", 00:20:36.207 "nguid": "9EC33F69686E4DA49B26219222D8A45F", 00:20:36.207 "uuid": "9ec33f69-686e-4da4-9b26-219222d8a45f" 00:20:36.207 }, 00:20:36.207 { 00:20:36.207 "nsid": 2, 00:20:36.207 "bdev_name": "Malloc3", 00:20:36.207 "name": "Malloc3", 00:20:36.207 "nguid": "68D6D566E08C41D6A6C7AF8176697355", 00:20:36.207 "uuid": "68d6d566-e08c-41d6-a6c7-af8176697355" 00:20:36.207 } 00:20:36.207 ] 00:20:36.207 }, 00:20:36.207 { 00:20:36.207 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:36.207 "subtype": "NVMe", 00:20:36.207 "listen_addresses": [ 00:20:36.207 { 00:20:36.207 "trtype": "VFIOUSER", 00:20:36.207 "adrfam": "IPv4", 00:20:36.207 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:36.208 "trsvcid": "0" 00:20:36.208 } 00:20:36.208 ], 00:20:36.208 "allow_any_host": true, 00:20:36.208 "hosts": [], 00:20:36.208 "serial_number": "SPDK2", 00:20:36.208 "model_number": "SPDK bdev Controller", 00:20:36.208 "max_namespaces": 32, 00:20:36.208 "min_cntlid": 1, 00:20:36.208 "max_cntlid": 65519, 00:20:36.208 "namespaces": [ 00:20:36.208 { 00:20:36.208 "nsid": 1, 00:20:36.208 "bdev_name": "Malloc2", 00:20:36.208 "name": "Malloc2", 00:20:36.208 "nguid": "73D05F29DADA45B68D373431E33B0FFE", 00:20:36.208 "uuid": "73d05f29-dada-45b6-8d37-3431e33b0ffe" 00:20:36.208 }, 00:20:36.208 { 00:20:36.208 "nsid": 2, 00:20:36.208 "bdev_name": "Malloc4", 00:20:36.208 "name": "Malloc4", 00:20:36.208 "nguid": "950A37373A884445886452236F28735A", 00:20:36.208 "uuid": "950a3737-3a88-4445-8864-52236f28735a" 00:20:36.208 } 00:20:36.208 ] 00:20:36.208 } 00:20:36.208 ] 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1701574 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1692419 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1692419 ']' 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1692419 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692419 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1692419' 00:20:36.208 killing process with pid 1692419 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1692419 00:20:36.208 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1692419 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1701839 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1701839' 00:20:36.469 Process pid: 1701839 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1701839 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1701839 ']' 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.469 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:36.469 [2024-10-13 14:16:40.045864] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:36.469 [2024-10-13 14:16:40.046831] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:20:36.469 [2024-10-13 14:16:40.046876] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.730 [2024-10-13 14:16:40.180591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:36.730 [2024-10-13 14:16:40.228586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.730 [2024-10-13 14:16:40.245291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.730 [2024-10-13 14:16:40.245320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.730 [2024-10-13 14:16:40.245326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.730 [2024-10-13 14:16:40.245331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.730 [2024-10-13 14:16:40.245335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.730 [2024-10-13 14:16:40.246710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.730 [2024-10-13 14:16:40.246862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.730 [2024-10-13 14:16:40.246993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.730 [2024-10-13 14:16:40.246995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.730 [2024-10-13 14:16:40.292673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:36.730 [2024-10-13 14:16:40.293535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:36.730 [2024-10-13 14:16:40.295210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:36.730 [2024-10-13 14:16:40.295268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:36.730 [2024-10-13 14:16:40.295304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
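Note: the provisioning steps traced below reduce to a short JSON-RPC sequence. As a condensed sketch (rpc.py stands in for the full scripts/rpc.py path in the trace; the loop form is an editorial summary, but every command, name, and socket path is taken from the trace itself, with error handling omitted):

    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        # one vfio-user socket directory per controller
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Each subsystem then exposes its Malloc bdev as namespace 1 on a vfio-user socket rather than a TCP port, which is why the transport addresses in this log are filesystem paths.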
00:20:37.302 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.302 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:20:37.302 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:38.244 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:38.504 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:38.504 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:38.504 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:38.504 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:38.504 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:38.765 Malloc1 00:20:38.765 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:38.765 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:39.025 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:39.286 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:39.286 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:39.286 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:39.286 Malloc2 00:20:39.546 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:39.546 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:39.806 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1701839 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1701839 ']' 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1701839 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701839 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701839' 00:20:40.067 killing process with pid 1701839 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1701839 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1701839 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:40.067 00:20:40.067 real 0m52.105s 00:20:40.067 user 3m19.629s 00:20:40.067 sys 0m2.737s 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.067 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:40.067 ************************************ 00:20:40.067 END TEST nvmf_vfio_user 00:20:40.067 ************************************ 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.329 ************************************ 00:20:40.329 START TEST nvmf_vfio_user_nvme_compliance 00:20:40.329 ************************************ 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:40.329 * Looking for test storage... 
00:20:40.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:20:40.329 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:40.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.329 --rc genhtml_branch_coverage=1 00:20:40.329 --rc genhtml_function_coverage=1 00:20:40.329 --rc genhtml_legend=1 00:20:40.329 --rc geninfo_all_blocks=1 00:20:40.329 --rc geninfo_unexecuted_blocks=1 00:20:40.329 00:20:40.329 ' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:40.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.329 --rc genhtml_branch_coverage=1 00:20:40.329 --rc genhtml_function_coverage=1 00:20:40.329 --rc genhtml_legend=1 00:20:40.329 --rc geninfo_all_blocks=1 00:20:40.329 --rc geninfo_unexecuted_blocks=1 00:20:40.329 00:20:40.329 ' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:40.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.329 --rc genhtml_branch_coverage=1 00:20:40.329 --rc genhtml_function_coverage=1 00:20:40.329 --rc genhtml_legend=1 00:20:40.329 --rc geninfo_all_blocks=1 00:20:40.329 --rc geninfo_unexecuted_blocks=1 00:20:40.329 00:20:40.329 ' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:40.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.329 --rc genhtml_branch_coverage=1 00:20:40.329 --rc genhtml_function_coverage=1 00:20:40.329 --rc genhtml_legend=1 00:20:40.329 --rc geninfo_all_blocks=1 00:20:40.329 --rc 
geninfo_unexecuted_blocks=1 00:20:40.329 00:20:40.329 ' 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.329 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.590 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1702598 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1702598' 00:20:40.591 Process pid: 1702598 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1702598 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1702598 ']' 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.591 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:40.591 [2024-10-13 14:16:44.117542] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:20:40.591 [2024-10-13 14:16:44.117605] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.591 [2024-10-13 14:16:44.252548] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:40.851 [2024-10-13 14:16:44.299006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:40.851 [2024-10-13 14:16:44.315702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.851 [2024-10-13 14:16:44.315735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.851 [2024-10-13 14:16:44.315742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.851 [2024-10-13 14:16:44.315746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.851 [2024-10-13 14:16:44.315750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.851 [2024-10-13 14:16:44.317051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.851 [2024-10-13 14:16:44.317204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.851 [2024-10-13 14:16:44.317308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.420 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:41.420 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:20:41.420 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:42.361 malloc0 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.361 14:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.361 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:42.620 00:20:42.620 00:20:42.620 CUnit - A unit testing framework for C - Version 2.1-3 00:20:42.620 http://cunit.sourceforge.net/ 00:20:42.620 00:20:42.620 00:20:42.620 Suite: nvme_compliance 00:20:42.620 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-13 14:16:46.241341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:42.620 [2024-10-13 14:16:46.242621] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:42.620 [2024-10-13 14:16:46.242632] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:42.620 [2024-10-13 14:16:46.242637] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:42.620 [2024-10-13 14:16:46.244351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:42.620 passed 00:20:42.620 Test: admin_identify_ctrlr_verify_fused ...[2024-10-13 14:16:46.320681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:42.620 [2024-10-13 14:16:46.324699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:42.880 passed 00:20:42.880 Test: admin_identify_ns ...[2024-10-13 14:16:46.401055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:42.880 [2024-10-13 14:16:46.463076] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:42.880 [2024-10-13 14:16:46.471073] 
ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:42.880 [2024-10-13 14:16:46.492180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:42.880 passed 00:20:42.880 Test: admin_get_features_mandatory_features ...[2024-10-13 14:16:46.565272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:42.880 [2024-10-13 14:16:46.568282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.139 passed 00:20:43.139 Test: admin_get_features_optional_features ...[2024-10-13 14:16:46.644552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.139 [2024-10-13 14:16:46.647566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.139 passed 00:20:43.139 Test: admin_set_features_number_of_queues ...[2024-10-13 14:16:46.724154] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.139 [2024-10-13 14:16:46.831153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.399 passed 00:20:43.399 Test: admin_get_log_page_mandatory_logs ...[2024-10-13 14:16:46.904276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.399 [2024-10-13 14:16:46.907290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.399 passed 00:20:43.400 Test: admin_get_log_page_with_lpo ...[2024-10-13 14:16:46.982850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.400 [2024-10-13 14:16:47.051079] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:43.400 [2024-10-13 14:16:47.064101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.400 passed 00:20:43.659 Test: fabric_property_get ...[2024-10-13 14:16:47.138169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.659 [2024-10-13 14:16:47.139361] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:43.659 [2024-10-13 14:16:47.141184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.659 passed 00:20:43.659 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-13 14:16:47.216452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.659 [2024-10-13 14:16:47.217648] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:43.659 [2024-10-13 14:16:47.219463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.659 passed 00:20:43.659 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-13 14:16:47.296015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:43.918 [2024-10-13 14:16:47.380068] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:43.918 [2024-10-13 14:16:47.396069] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:43.918 [2024-10-13 14:16:47.402132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.918 passed 00:20:43.918 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-13 14:16:47.474206] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 
00:20:43.918 [2024-10-13 14:16:47.475407] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:43.918 [2024-10-13 14:16:47.477213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:43.918 passed 00:20:43.918 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-13 14:16:47.553786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:44.177 [2024-10-13 14:16:47.629072] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:44.178 [2024-10-13 14:16:47.653068] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:44.178 [2024-10-13 14:16:47.659140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:44.178 passed 00:20:44.178 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-13 14:16:47.731201] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:44.178 [2024-10-13 14:16:47.732401] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:44.178 [2024-10-13 14:16:47.732418] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:44.178 [2024-10-13 14:16:47.734210] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:44.178 passed 00:20:44.178 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-13 14:16:47.810786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:44.441 [2024-10-13 14:16:47.902072] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:44.441 [2024-10-13 14:16:47.910073] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:44.441 [2024-10-13 14:16:47.918068] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:44.441 [2024-10-13 14:16:47.926068] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:44.441 [2024-10-13 14:16:47.956143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:44.441 passed 00:20:44.441 Test: admin_create_io_sq_verify_pc ...[2024-10-13 14:16:48.032243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:44.441 [2024-10-13 14:16:48.057074] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:44.441 [2024-10-13 14:16:48.075538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:44.441 passed 00:20:44.706 Test: admin_create_io_qp_max_qps ...[2024-10-13 14:16:48.150803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:45.643 [2024-10-13 14:16:49.251071] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:20:46.211 [2024-10-13 14:16:49.629178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:46.211 passed 00:20:46.211 Test: admin_create_io_sq_shared_cq ...[2024-10-13 14:16:49.704812] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:46.211 [2024-10-13 14:16:49.837073] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:46.211 [2024-10-13 14:16:49.874103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:46.211 passed 00:20:46.211 00:20:46.211 Run Summary: Type Total Ran Passed Failed Inactive 00:20:46.211 suites 1 1 n/a 0 0 00:20:46.211 tests 18 18 18 0 0 00:20:46.211 asserts 360 360 360 0 n/a 00:20:46.211 00:20:46.211 Elapsed time = 1.490 seconds 00:20:46.211 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1702598 00:20:46.211 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1702598 ']' 00:20:46.211 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1702598 00:20:46.211 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1702598 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1702598' 00:20:46.471 killing process with pid 1702598 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1702598 00:20:46.471 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1702598 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:46.471 00:20:46.471 real 0m6.266s 00:20:46.471 user 0m17.521s 00:20:46.471 sys 0m0.527s 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:46.471 ************************************ 00:20:46.471 END TEST nvmf_vfio_user_nvme_compliance 00:20:46.471 ************************************ 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.471 ************************************ 00:20:46.471 START TEST nvmf_vfio_user_fuzz 00:20:46.471 ************************************ 00:20:46.471 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:46.732 * Looking for test storage... 
00:20:46.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:46.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.733 --rc genhtml_branch_coverage=1 00:20:46.733 --rc genhtml_function_coverage=1 00:20:46.733 --rc genhtml_legend=1 00:20:46.733 --rc geninfo_all_blocks=1 00:20:46.733 --rc geninfo_unexecuted_blocks=1 00:20:46.733 00:20:46.733 ' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:46.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.733 --rc genhtml_branch_coverage=1 00:20:46.733 --rc genhtml_function_coverage=1 00:20:46.733 --rc genhtml_legend=1 00:20:46.733 --rc geninfo_all_blocks=1 00:20:46.733 --rc geninfo_unexecuted_blocks=1 00:20:46.733 00:20:46.733 ' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:46.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.733 --rc genhtml_branch_coverage=1 00:20:46.733 --rc genhtml_function_coverage=1 00:20:46.733 --rc genhtml_legend=1 00:20:46.733 --rc geninfo_all_blocks=1 00:20:46.733 --rc geninfo_unexecuted_blocks=1 00:20:46.733 00:20:46.733 ' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:46.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.733 --rc genhtml_branch_coverage=1 00:20:46.733 --rc genhtml_function_coverage=1 00:20:46.733 --rc genhtml_legend=1 00:20:46.733 --rc geninfo_all_blocks=1 00:20:46.733 --rc geninfo_unexecuted_blocks=1 00:20:46.733 00:20:46.733 ' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.733 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:46.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1703998 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1703998' 00:20:46.734 Process pid: 1703998 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1703998 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1703998 ']' 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
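The pid capture, the `trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT`, and the `waitforlisten` call traced above are the harness's standard daemon bring-up: launch nvmf_tgt in the background, arrange cleanup on any exit, then poll the RPC socket until the target answers. A minimal sketch of that pattern, assuming SPDK's scripts/rpc.py is available (the polling loop, timeout, and kill -9 stand in for autotest_common.sh's killprocess/waitforlisten helpers and are illustrative, not their exact implementation):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # poll until the target serves RPCs on /var/tmp/spdk.sock (~10 s ceiling)
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done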
00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.734 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:47.675 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.675 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:20:47.675 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.617 malloc0 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.617 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.878 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.878 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:20:48.878 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:20.991 Fuzzing completed. Shutting down the fuzz application 00:21:20.991 00:21:20.991 Dumping successful admin opcodes: 00:21:20.991 8, 9, 10, 24, 00:21:20.991 Dumping successful io opcodes: 00:21:20.991 0, 00:21:20.991 NS: 0x20000081ef00 I/O qp, Total commands completed: 1382774, total successful commands: 5427, random_seed: 3496887232 00:21:20.991 NS: 0x20000081ef00 admin qp, Total commands completed: 342554, total successful commands: 2763, random_seed: 4277820608 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1703998 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1703998 ']' 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1703998 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1703998 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1703998' 00:21:20.991 killing process with pid 1703998 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1703998 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1703998 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:20.991 00:21:20.991 real 0m32.806s 00:21:20.991 user 0m37.587s 00:21:20.991 sys 0m24.079s 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:20.991 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:20.991 
************************************ 00:21:20.991 END TEST nvmf_vfio_user_fuzz 00:21:20.991 ************************************ 00:21:20.991 14:17:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:20.991 14:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:20.991 14:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:20.991 14:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.991 ************************************ 00:21:20.991 START TEST nvmf_auth_target 00:21:20.991 ************************************ 00:21:20.991 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:20.991 * Looking for test storage... 00:21:20.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.991 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.992 --rc genhtml_branch_coverage=1 00:21:20.992 --rc genhtml_function_coverage=1 00:21:20.992 --rc genhtml_legend=1 00:21:20.992 --rc geninfo_all_blocks=1 00:21:20.992 --rc geninfo_unexecuted_blocks=1 00:21:20.992 00:21:20.992 ' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.992 --rc genhtml_branch_coverage=1 00:21:20.992 --rc genhtml_function_coverage=1 00:21:20.992 --rc genhtml_legend=1 00:21:20.992 --rc geninfo_all_blocks=1 00:21:20.992 --rc geninfo_unexecuted_blocks=1 00:21:20.992 00:21:20.992 ' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.992 --rc genhtml_branch_coverage=1 00:21:20.992 --rc genhtml_function_coverage=1 00:21:20.992 --rc genhtml_legend=1 00:21:20.992 --rc geninfo_all_blocks=1 00:21:20.992 --rc geninfo_unexecuted_blocks=1 00:21:20.992 00:21:20.992 ' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:20.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.992 --rc genhtml_branch_coverage=1 00:21:20.992 --rc genhtml_function_coverage=1 00:21:20.992 --rc genhtml_legend=1 00:21:20.992 --rc geninfo_all_blocks=1 00:21:20.992 --rc geninfo_unexecuted_blocks=1 00:21:20.992 00:21:20.992 ' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.992 14:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:20.992 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.993 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:27.717 
14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:27.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.717 14:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:27.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:27.717 Found net devices under 0000:31:00.0: cvl_0_0 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:27.717 Found net devices under 0000:31:00.1: cvl_0_1 00:21:27.717 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.718 14:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:21:27.718 00:21:27.718 --- 10.0.0.2 ping statistics --- 00:21:27.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.718 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:21:27.718 00:21:27.718 --- 10.0.0.1 ping statistics --- 00:21:27.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.718 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:27.718 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1714058 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1714058 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1714058 ']' 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
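The nvmf_tcp_init sequence traced above builds a point-to-point test topology: the target-side port (cvl_0_0) is moved into a private network namespace as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, an iptables rule opens TCP 4420 toward the initiator interface, and each direction is verified with a single ping. A minimal standalone sketch of the same steps, assuming this run's cvl_0_0/cvl_0_1 interface names:

#!/usr/bin/env bash
# Rebuild the target namespace the way the trace above does (run as root).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator

Anything the target process binds (here the NVMe/TCP listener on 4420) is then reachable from the default namespace over the cable between the two ports, which is why the harness wraps every target-side command in ip netns exec cvl_0_0_ns_spdk.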
00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.718 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1714212 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:28.290 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e1276695da0f9cb8b469c9db5ce2d61582663538cb1115b5 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.IFd 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e1276695da0f9cb8b469c9db5ce2d61582663538cb1115b5 0 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e1276695da0f9cb8b469c9db5ce2d61582663538cb1115b5 0 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e1276695da0f9cb8b469c9db5ce2d61582663538cb1115b5 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:21:28.291 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
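Each gen_dhchap_key call above draws random hex from /dev/urandom (xxd -p -c0 -l N prints 2N hex characters) and pipes it through the small python stub just traced to produce the DHHC-1 secret string that reappears later in the nvme connect step. A sketch of what that stub appears to compute — the ASCII hex string itself serves as the secret bytes, with what looks like a little-endian CRC32 appended before base64 encoding (an assumption; the stub's body is not echoed in the trace):

# Reproduce format_dhchap_key for keys[0] from this run.
key=e1276695da0f9cb8b469c9db5ce2d61582663538cb1115b5    # 48 hex chars generated above
digest=0                                                # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # ASCII hex string used as the secret bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte integrity trailer
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
EOF

The base64 payload of the DHHC-1:00:ZTEyNzY2... secret used in the later connect step decodes back to exactly this hex string plus four trailing bytes, which is what the sketch reproduces.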
00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.IFd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.IFd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.IFd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=86e38aa944f9ca1915562e1ecab76c896064736b2a0e6cf8e770ab3bc365d35b 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Rhd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 86e38aa944f9ca1915562e1ecab76c896064736b2a0e6cf8e770ab3bc365d35b 3 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 86e38aa944f9ca1915562e1ecab76c896064736b2a0e6cf8e770ab3bc365d35b 3 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=86e38aa944f9ca1915562e1ecab76c896064736b2a0e6cf8e770ab3bc365d35b 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Rhd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Rhd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Rhd 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
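The same recipe repeats for all six secrets generated in this block: pick an hmac id from the digests map, draw the requested number of hex characters, format the DHHC-1 string, and store it mode 0600 in a mktemp file. Consolidated into one helper (a sketch; format_dhchap_key is assumed to be the stub from the previous example, printing the DHHC-1 string to stdout):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # DHHC-1 hmac ids, as traced

gen_dhchap_key() {   # usage: gen_dhchap_key <digest> <hex-len>; prints the key file path
        local digest=$1 len=$2 file key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # xxd -l counts bytes -> len hex chars
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                               # the trace does this for every key file
        echo "$file"
}

keys[1]=$(gen_dhchap_key sha256 32)      # e.g. /tmp/spdk.key-sha256.XyX in this run
ckeys[1]=$(gen_dhchap_key sha384 48)     # paired controller key for bidirectional auth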
00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2aebeadcfe414cf30362a031bb822843 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XyX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2aebeadcfe414cf30362a031bb822843 1 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2aebeadcfe414cf30362a031bb822843 1 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2aebeadcfe414cf30362a031bb822843 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XyX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XyX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.XyX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e0fa562ff19f2833dc97d40b2754e0db1f8498bb152556ef 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.zWV 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e0fa562ff19f2833dc97d40b2754e0db1f8498bb152556ef 2 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e0fa562ff19f2833dc97d40b2754e0db1f8498bb152556ef 2 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.553 14:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e0fa562ff19f2833dc97d40b2754e0db1f8498bb152556ef 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.zWV 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.zWV 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.zWV 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.553 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.554 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:21:28.554 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:21:28.554 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:28.554 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cc2d7136298b6536807948afd8076c6f9a8e0fcad1ebfc38 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.kMy 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cc2d7136298b6536807948afd8076c6f9a8e0fcad1ebfc38 2 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cc2d7136298b6536807948afd8076c6f9a8e0fcad1ebfc38 2 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cc2d7136298b6536807948afd8076c6f9a8e0fcad1ebfc38 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.kMy 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.kMy 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.kMy 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
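For reproducing a single failing combination outside this harness, nvme-cli can mint compatible secrets directly. A hedged sketch, assuming a reasonably recent nvme-cli (2.x) whose gen-dhchap-key subcommand takes the key length in bytes and the same 0-3 hmac ids used above:

# Assumption: nvme-cli 2.x syntax; ids follow the digests map (0=none ... 3=sha512).
nvme gen-dhchap-key --key-length=32 --hmac=1 > key.txt   # prints a DHHC-1:01:...: secret
chmod 0600 key.txt                                       # match the harness's key-file mode
nvme gen-dhchap-key --key-length=48 --hmac=0             # null-hmac variant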
00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d09d44c5e0d3f0f89c8b561a99345236 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.nqi 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d09d44c5e0d3f0f89c8b561a99345236 1 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d09d44c5e0d3f0f89c8b561a99345236 1 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d09d44c5e0d3f0f89c8b561a99345236 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.nqi 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.nqi 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.nqi 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=567720ffab44c4e007dbb048c518897b84b52eef6083d8e0fe9d5afdf5ea3a9b 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.DcF 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 567720ffab44c4e007dbb048c518897b84b52eef6083d8e0fe9d5afdf5ea3a9b 3 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 567720ffab44c4e007dbb048c518897b84b52eef6083d8e0fe9d5afdf5ea3a9b 3 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=567720ffab44c4e007dbb048c518897b84b52eef6083d8e0fe9d5afdf5ea3a9b 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.DcF 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.DcF 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.DcF 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1714058 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1714058 ']' 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.816 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1714212 /var/tmp/host.sock 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1714212 ']' 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:29.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
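Two SPDK daemons are in play from here on: the nvmf_tgt target inside the namespace (pid 1714058, default RPC socket /var/tmp/spdk.sock) and a second spdk_tgt standing in for the host side (pid 1714212, started above with -m 2 -r /var/tmp/host.sock -L nvme_auth). Once both are listening, every key file is registered with both daemons under the same keyring names, which is what the rpc_cmd/hostrpc pairs that follow do. Condensed for the first key pair:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side (default /var/tmp/spdk.sock).
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.IFd
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rhd
# Host side (-s /var/tmp/host.sock): same files, same names, so later RPCs can
# reference them simply as --dhchap-key key0 / --dhchap-ctrlr-key ckey0.
$RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.IFd
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rhd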
00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.078 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IFd 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.IFd 00:21:29.339 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.IFd 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Rhd ]] 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rhd 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rhd 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rhd 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XyX 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.600 14:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.XyX 00:21:29.600 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.XyX 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.zWV ]] 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWV 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWV 00:21:29.861 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWV 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kMy 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kMy 00:21:30.122 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kMy 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.nqi ]] 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nqi 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nqi 00:21:30.383 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nqi 00:21:30.383 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:30.383 14:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DcF 00:21:30.383 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.383 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.383 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.383 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DcF 00:21:30.383 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DcF 00:21:30.644 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:30.644 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:30.644 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.644 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.644 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:30.644 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.905 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.905 
14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.166 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.166 { 00:21:31.166 "cntlid": 1, 00:21:31.166 "qid": 0, 00:21:31.166 "state": "enabled", 00:21:31.166 "thread": "nvmf_tgt_poll_group_000", 00:21:31.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:31.166 "listen_address": { 00:21:31.166 "trtype": "TCP", 00:21:31.166 "adrfam": "IPv4", 00:21:31.166 "traddr": "10.0.0.2", 00:21:31.166 "trsvcid": "4420" 00:21:31.166 }, 00:21:31.166 "peer_address": { 00:21:31.166 "trtype": "TCP", 00:21:31.166 "adrfam": "IPv4", 00:21:31.166 "traddr": "10.0.0.1", 00:21:31.166 "trsvcid": "34446" 00:21:31.166 }, 00:21:31.166 "auth": { 00:21:31.166 "state": "completed", 00:21:31.166 "digest": "sha256", 00:21:31.166 "dhgroup": "null" 00:21:31.166 } 00:21:31.166 } 00:21:31.166 ]' 00:21:31.166 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.427 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.689 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=:
00:21:31.689 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=:
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:32.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:32.261 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.522 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.783
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:32.783 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:32.783 {
00:21:32.783 "cntlid": 3,
00:21:32.783 "qid": 0,
00:21:32.783 "state": "enabled",
00:21:32.783 "thread": "nvmf_tgt_poll_group_000",
00:21:32.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:32.783 "listen_address": {
00:21:32.783 "trtype": "TCP",
00:21:32.783 "adrfam": "IPv4",
00:21:32.783 "traddr": "10.0.0.2",
00:21:32.783 "trsvcid": "4420"
00:21:32.783 },
00:21:32.783 "peer_address": {
00:21:32.783 "trtype": "TCP",
00:21:32.783 "adrfam": "IPv4",
00:21:32.783 "traddr": "10.0.0.1",
00:21:32.783 "trsvcid": "34478"
00:21:32.783 },
00:21:32.783 "auth": {
00:21:32.783 "state": "completed",
00:21:32.783 "digest": "sha256",
00:21:32.783 "dhgroup": "null"
00:21:32.783 }
00:21:32.783 }
00:21:32.783 ]'
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:33.044 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:33.305 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==:
00:21:33.305 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==:
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:33.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:33.876 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
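For orientation, one full round of the flow traced above reduces to three RPC calls plus a teardown. This is a condensed sketch, not the test script itself: the rpc.py path, host socket, addresses, NQNs, and key names are copied from the trace, while the target-side add_host call is assumed to go over the target's default RPC socket.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  # Target side: authorize the host on the subsystem with a DH-HMAC-CHAP key pair.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: pin the digest/dhgroup combination under test, then attach a
  # controller, which forces the authentication handshake on the new queue.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Teardown between rounds, as in the trace.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0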
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.136 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.396
00:21:34.396 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:34.396 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:34.396 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:34.396 {
00:21:34.396 "cntlid": 5,
00:21:34.396 "qid": 0,
00:21:34.396 "state": "enabled",
00:21:34.396 "thread": "nvmf_tgt_poll_group_000",
00:21:34.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:34.396 "listen_address": {
00:21:34.396 "trtype": "TCP",
00:21:34.396 "adrfam": "IPv4",
00:21:34.396 "traddr": "10.0.0.2",
00:21:34.396 "trsvcid": "4420"
00:21:34.396 },
00:21:34.396 "peer_address": {
00:21:34.396 "trtype": "TCP",
00:21:34.396 "adrfam": "IPv4",
00:21:34.396 "traddr": "10.0.0.1",
00:21:34.396 "trsvcid": "34514"
00:21:34.396 },
00:21:34.396 "auth": {
00:21:34.396 "state": "completed",
00:21:34.396 "digest": "sha256",
00:21:34.396 "dhgroup": "null"
00:21:34.396 }
00:21:34.396 }
00:21:34.396 ]'
00:21:34.396 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:34.656 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:34.916 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K:
00:21:34.917 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K:
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:35.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:35.488 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:35.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:36.010
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:36.010 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:36.010 {
00:21:36.010 "cntlid": 7,
00:21:36.010 "qid": 0,
00:21:36.010 "state": "enabled",
00:21:36.010 "thread": "nvmf_tgt_poll_group_000",
00:21:36.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:36.010 "listen_address": {
00:21:36.010 "trtype": "TCP",
00:21:36.010 "adrfam": "IPv4",
00:21:36.010 "traddr": "10.0.0.2",
00:21:36.010 "trsvcid": "4420"
00:21:36.010 },
00:21:36.010 "peer_address": {
00:21:36.010 "trtype": "TCP",
00:21:36.010 "adrfam": "IPv4",
00:21:36.010 "traddr": "10.0.0.1",
00:21:36.010 "trsvcid": "34548"
00:21:36.010 },
00:21:36.010 "auth": {
00:21:36.010 "state": "completed",
00:21:36.010 "digest": "sha256",
00:21:36.010 "dhgroup": "null"
00:21:36.010 }
00:21:36.010 }
00:21:36.010 ]'
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:36.271 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:36.531 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=:
00:21:36.531 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=:
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:37.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:37.103 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
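The permutations being swept come from the two xtrace'd loops at target/auth.sh@119-120 visible above. In sketch form, with the array contents limited to what is visible in this slice (dhgroups null, ffdhe2048 and ffdhe3072, keys key0 through key3; the real script's arrays may be longer):

  for dhgroup in null ffdhe2048 ffdhe3072; do
    for keyid in 0 1 2 3; do
      # hostrpc is the script's wrapper around rpc.py -s /var/tmp/host.sock
      hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha256 "$dhgroup" "$keyid"   # helper from target/auth.sh
    done
  done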
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.364 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.364
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:37.625 {
00:21:37.625 "cntlid": 9,
00:21:37.625 "qid": 0,
00:21:37.625 "state": "enabled",
00:21:37.625 "thread": "nvmf_tgt_poll_group_000",
00:21:37.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:37.625 "listen_address": {
00:21:37.625 "trtype": "TCP",
00:21:37.625 "adrfam": "IPv4",
00:21:37.625 "traddr": "10.0.0.2",
00:21:37.625 "trsvcid": "4420"
00:21:37.625 },
00:21:37.625 "peer_address": {
00:21:37.625 "trtype": "TCP",
00:21:37.625 "adrfam": "IPv4",
00:21:37.625 "traddr": "10.0.0.1",
00:21:37.625 "trsvcid": "46904"
00:21:37.625 },
00:21:37.625 "auth": {
00:21:37.625 "state": "completed",
00:21:37.625 "digest": "sha256",
00:21:37.625 "dhgroup": "ffdhe2048"
00:21:37.625 }
00:21:37.625 }
00:21:37.625 ]'
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:37.625 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:37.885 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:37.885 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:37.885 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:37.885 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:37.885 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:38.146 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=:
00:21:38.146 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=:
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:38.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:38.716 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:38.977 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:39.237
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.238 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:39.238 {
00:21:39.238 "cntlid": 11,
00:21:39.238 "qid": 0,
00:21:39.238 "state": "enabled",
00:21:39.238 "thread": "nvmf_tgt_poll_group_000",
00:21:39.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:39.238 "listen_address": {
00:21:39.238 "trtype": "TCP",
00:21:39.238 "adrfam": "IPv4",
00:21:39.238 "traddr": "10.0.0.2",
00:21:39.238 "trsvcid": "4420"
00:21:39.238 },
00:21:39.238 "peer_address": {
00:21:39.238 "trtype": "TCP",
00:21:39.238 "adrfam": "IPv4",
00:21:39.238 "traddr": "10.0.0.1",
00:21:39.238 "trsvcid": "46928"
00:21:39.238 },
00:21:39.238 "auth": {
00:21:39.238 "state": "completed",
00:21:39.238 "digest": "sha256",
00:21:39.238 "dhgroup": "ffdhe2048"
00:21:39.238 }
00:21:39.238 }
00:21:39.238 ]'
00:21:39.498 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:39.498 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:39.498 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:39.498 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:39.498 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:39.498 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:39.498 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:39.498 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:39.758 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==:
00:21:39.758 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==:
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:40.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:40.329 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:40.589 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.590 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:40.590 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.590 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:40.590 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.590 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.590 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.850
00:21:40.850 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:40.850 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:40.850 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:41.111 {
00:21:41.111 "cntlid": 13,
00:21:41.111 "qid": 0,
00:21:41.111 "state": "enabled",
00:21:41.111 "thread": "nvmf_tgt_poll_group_000",
00:21:41.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:41.111 "listen_address": {
00:21:41.111 "trtype": "TCP",
00:21:41.111 "adrfam": "IPv4",
00:21:41.111 "traddr": "10.0.0.2",
00:21:41.111 "trsvcid": "4420"
00:21:41.111 },
00:21:41.111 "peer_address": {
00:21:41.111 "trtype": "TCP",
00:21:41.111 "adrfam": "IPv4",
00:21:41.111 "traddr": "10.0.0.1",
00:21:41.111 "trsvcid": "46956"
00:21:41.111 },
00:21:41.111 "auth": {
00:21:41.111 "state": "completed",
00:21:41.111 "digest": "sha256",
00:21:41.111 "dhgroup": "ffdhe2048"
00:21:41.111 }
00:21:41.111 }
00:21:41.111 ]'
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:41.111 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:41.371 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K:
00:21:41.371 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K:
00:21:41.941 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:41.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:42.201 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:42.461
00:21:42.461 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:42.461 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:42.461 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:42.721 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:42.721 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:42.721 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:42.721 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.721 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.721 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:42.721 {
00:21:42.721 "cntlid": 15,
00:21:42.721 "qid": 0,
00:21:42.721 "state": "enabled",
00:21:42.721 "thread": "nvmf_tgt_poll_group_000",
00:21:42.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:42.721 "listen_address": {
00:21:42.721 "trtype": "TCP",
00:21:42.721 "adrfam": "IPv4",
00:21:42.721 "traddr": "10.0.0.2",
00:21:42.721 "trsvcid": "4420"
00:21:42.721 },
00:21:42.721 "peer_address": {
00:21:42.721 "trtype": "TCP",
00:21:42.721 "adrfam": "IPv4",
00:21:42.721 "traddr": "10.0.0.1",
00:21:42.721 "trsvcid": "46978"
00:21:42.722 },
00:21:42.722 "auth": {
00:21:42.722 "state": "completed",
00:21:42.722 "digest": "sha256",
00:21:42.722 "dhgroup": "ffdhe2048"
00:21:42.722 }
00:21:42.722 }
00:21:42.722 ]'
00:21:42.722 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:42.722 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:42.722 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:42.722 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:42.722 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:42.982 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:42.982 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:42.982 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:42.982 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=:
00:21:42.982 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=:
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:43.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:43.554 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:43.814 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.815 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:44.075
00:21:44.075 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:44.075 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:44.075 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:44.336 {
00:21:44.336 "cntlid": 17,
00:21:44.336 "qid": 0,
00:21:44.336 "state": "enabled",
00:21:44.336 "thread": "nvmf_tgt_poll_group_000",
00:21:44.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:44.336 "listen_address": {
00:21:44.336 "trtype": "TCP",
00:21:44.336 "adrfam": "IPv4",
00:21:44.336 "traddr": "10.0.0.2",
00:21:44.336 "trsvcid": "4420"
00:21:44.336 },
00:21:44.336 "peer_address": {
00:21:44.336 "trtype": "TCP",
00:21:44.336 "adrfam": "IPv4",
00:21:44.336 "traddr": "10.0.0.1",
00:21:44.336 "trsvcid": "47006"
00:21:44.336 },
00:21:44.336 "auth": {
00:21:44.336 "state": "completed",
00:21:44.336 "digest": "sha256",
00:21:44.336 "dhgroup": "ffdhe3072"
00:21:44.336 }
00:21:44.336 }
00:21:44.336 ]'
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:44.336 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:44.596 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=:
00:21:44.596 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=:
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:45.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:45.166 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:45.426 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:45.687
00:21:45.687 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:45.687 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:45.687 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:45.948 {
00:21:45.948 "cntlid": 19,
00:21:45.948 "qid": 0,
00:21:45.948 "state": "enabled",
00:21:45.948 "thread": "nvmf_tgt_poll_group_000",
00:21:45.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:21:45.948 "listen_address": {
00:21:45.948 "trtype": "TCP",
00:21:45.948 "adrfam": "IPv4",
00:21:45.948 "traddr": "10.0.0.2",
00:21:45.948 "trsvcid": "4420"
00:21:45.948 },
00:21:45.948 "peer_address": {
00:21:45.948 "trtype": "TCP",
00:21:45.948 "adrfam": "IPv4",
00:21:45.948 "traddr": "10.0.0.1",
00:21:45.948 "trsvcid": "47042"
00:21:45.948 },
00:21:45.948 "auth": {
00:21:45.948 "state": "completed",
00:21:45.948 "digest": "sha256",
00:21:45.948 "dhgroup": "ffdhe3072"
00:21:45.948 }
00:21:45.948 }
00:21:45.948 ]'
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:45.948 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:46.209 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==:
00:21:46.209 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==:
00:21:46.779 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:46.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:46.779 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:21:46.779 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:46.779 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.040 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.301 00:21:47.301 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.301 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.301 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.563 14:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.563 { 00:21:47.563 "cntlid": 21, 00:21:47.563 "qid": 0, 00:21:47.563 "state": "enabled", 00:21:47.563 "thread": "nvmf_tgt_poll_group_000", 00:21:47.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:47.563 "listen_address": { 00:21:47.563 "trtype": "TCP", 00:21:47.563 "adrfam": "IPv4", 00:21:47.563 "traddr": "10.0.0.2", 00:21:47.563 "trsvcid": "4420" 00:21:47.563 }, 00:21:47.563 "peer_address": { 00:21:47.563 "trtype": "TCP", 00:21:47.563 "adrfam": "IPv4", 00:21:47.563 "traddr": "10.0.0.1", 00:21:47.563 "trsvcid": "34458" 00:21:47.563 }, 00:21:47.563 "auth": { 00:21:47.563 "state": "completed", 00:21:47.563 "digest": "sha256", 00:21:47.563 "dhgroup": "ffdhe3072" 00:21:47.563 } 00:21:47.563 } 00:21:47.563 ]' 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.563 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.825 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:21:47.825 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.396 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.657 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.917 00:21:48.917 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.917 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.917 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.178 14:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.178 { 00:21:49.178 "cntlid": 23, 00:21:49.178 "qid": 0, 00:21:49.178 "state": "enabled", 00:21:49.178 "thread": "nvmf_tgt_poll_group_000", 00:21:49.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:49.178 "listen_address": { 00:21:49.178 "trtype": "TCP", 00:21:49.178 "adrfam": "IPv4", 00:21:49.178 "traddr": "10.0.0.2", 00:21:49.178 "trsvcid": "4420" 00:21:49.178 }, 00:21:49.178 "peer_address": { 00:21:49.178 "trtype": "TCP", 00:21:49.178 "adrfam": "IPv4", 00:21:49.178 "traddr": "10.0.0.1", 00:21:49.178 "trsvcid": "34488" 00:21:49.178 }, 00:21:49.178 "auth": { 00:21:49.178 "state": "completed", 00:21:49.178 "digest": "sha256", 00:21:49.178 "dhgroup": "ffdhe3072" 00:21:49.178 } 00:21:49.178 } 00:21:49.178 ]' 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.178 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.439 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:21:49.439 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:50.011 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.272 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.532 00:21:50.532 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.532 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.532 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.792 { 00:21:50.792 "cntlid": 25, 00:21:50.792 "qid": 0, 00:21:50.792 "state": "enabled", 00:21:50.792 "thread": "nvmf_tgt_poll_group_000", 00:21:50.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:50.792 "listen_address": { 00:21:50.792 "trtype": "TCP", 00:21:50.792 "adrfam": "IPv4", 00:21:50.792 "traddr": "10.0.0.2", 00:21:50.792 "trsvcid": "4420" 00:21:50.792 }, 00:21:50.792 "peer_address": { 00:21:50.792 "trtype": "TCP", 00:21:50.792 "adrfam": "IPv4", 00:21:50.792 "traddr": "10.0.0.1", 00:21:50.792 "trsvcid": "34520" 00:21:50.792 }, 00:21:50.792 "auth": { 00:21:50.792 "state": "completed", 00:21:50.792 "digest": "sha256", 00:21:50.792 "dhgroup": "ffdhe4096" 00:21:50.792 } 00:21:50.792 } 00:21:50.792 ]' 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.792 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.052 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:21:51.052 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:21:51.622 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.883 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.884 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.884 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.884 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.144 00:21:52.144 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.144 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.144 14:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.404 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.404 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.405 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.405 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.405 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.405 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.405 { 00:21:52.405 "cntlid": 27, 00:21:52.405 "qid": 0, 00:21:52.405 "state": "enabled", 00:21:52.405 "thread": "nvmf_tgt_poll_group_000", 00:21:52.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:52.405 "listen_address": { 00:21:52.405 "trtype": "TCP", 00:21:52.405 "adrfam": "IPv4", 00:21:52.405 "traddr": "10.0.0.2", 00:21:52.405 "trsvcid": "4420" 00:21:52.405 }, 00:21:52.405 "peer_address": { 00:21:52.405 "trtype": "TCP", 00:21:52.405 "adrfam": "IPv4", 00:21:52.405 "traddr": "10.0.0.1", 00:21:52.405 "trsvcid": "34546" 00:21:52.405 }, 00:21:52.405 "auth": { 00:21:52.405 "state": "completed", 00:21:52.405 "digest": "sha256", 00:21:52.405 "dhgroup": "ffdhe4096" 00:21:52.405 } 00:21:52.405 } 00:21:52.405 ]' 00:21:52.405 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.405 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.405 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.405 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.405 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.666 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.666 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.666 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.666 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:21:52.666 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:21:53.238 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.500 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.500 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.500 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.760 00:21:53.760 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.761 14:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.761 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.022 { 00:21:54.022 "cntlid": 29, 00:21:54.022 "qid": 0, 00:21:54.022 "state": "enabled", 00:21:54.022 "thread": "nvmf_tgt_poll_group_000", 00:21:54.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.022 "listen_address": { 00:21:54.022 "trtype": "TCP", 00:21:54.022 "adrfam": "IPv4", 00:21:54.022 "traddr": "10.0.0.2", 00:21:54.022 "trsvcid": "4420" 00:21:54.022 }, 00:21:54.022 "peer_address": { 00:21:54.022 "trtype": "TCP", 00:21:54.022 "adrfam": "IPv4", 00:21:54.022 "traddr": "10.0.0.1", 00:21:54.022 "trsvcid": "34568" 00:21:54.022 }, 00:21:54.022 "auth": { 00:21:54.022 "state": "completed", 00:21:54.022 "digest": "sha256", 00:21:54.022 "dhgroup": "ffdhe4096" 00:21:54.022 } 00:21:54.022 } 00:21:54.022 ]' 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.283 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.283 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.283 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.283 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:21:54.283 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.224 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.485 00:21:55.485 14:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.485 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.485 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.747 { 00:21:55.747 "cntlid": 31, 00:21:55.747 "qid": 0, 00:21:55.747 "state": "enabled", 00:21:55.747 "thread": "nvmf_tgt_poll_group_000", 00:21:55.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:55.747 "listen_address": { 00:21:55.747 "trtype": "TCP", 00:21:55.747 "adrfam": "IPv4", 00:21:55.747 "traddr": "10.0.0.2", 00:21:55.747 "trsvcid": "4420" 00:21:55.747 }, 00:21:55.747 "peer_address": { 00:21:55.747 "trtype": "TCP", 00:21:55.747 "adrfam": "IPv4", 00:21:55.747 "traddr": "10.0.0.1", 00:21:55.747 "trsvcid": "34602" 00:21:55.747 }, 00:21:55.747 "auth": { 00:21:55.747 "state": "completed", 00:21:55.747 "digest": "sha256", 00:21:55.747 "dhgroup": "ffdhe4096" 00:21:55.747 } 00:21:55.747 } 00:21:55.747 ]' 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.747 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.008 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:21:56.008 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.581 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.842 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.103 00:21:57.103 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.103 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.103 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.364 { 00:21:57.364 "cntlid": 33, 00:21:57.364 "qid": 0, 00:21:57.364 "state": "enabled", 00:21:57.364 "thread": "nvmf_tgt_poll_group_000", 00:21:57.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:57.364 "listen_address": { 00:21:57.364 "trtype": "TCP", 00:21:57.364 "adrfam": "IPv4", 00:21:57.364 "traddr": "10.0.0.2", 00:21:57.364 "trsvcid": "4420" 00:21:57.364 }, 00:21:57.364 "peer_address": { 00:21:57.364 "trtype": "TCP", 00:21:57.364 "adrfam": "IPv4", 00:21:57.364 "traddr": "10.0.0.1", 00:21:57.364 "trsvcid": "34626" 00:21:57.364 }, 00:21:57.364 "auth": { 00:21:57.364 "state": "completed", 00:21:57.364 "digest": "sha256", 00:21:57.364 "dhgroup": "ffdhe6144" 00:21:57.364 } 00:21:57.364 } 00:21:57.364 ]' 00:21:57.364 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.364 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.364 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:21:57.627 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:21:58.572 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.143 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.143 { 00:21:59.143 "cntlid": 35, 00:21:59.143 "qid": 0, 00:21:59.143 "state": "enabled", 00:21:59.143 "thread": "nvmf_tgt_poll_group_000", 00:21:59.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.143 "listen_address": { 00:21:59.143 "trtype": "TCP", 00:21:59.143 "adrfam": "IPv4", 00:21:59.143 "traddr": "10.0.0.2", 00:21:59.143 "trsvcid": "4420" 00:21:59.143 }, 00:21:59.143 "peer_address": { 00:21:59.143 "trtype": "TCP", 00:21:59.143 "adrfam": "IPv4", 00:21:59.143 "traddr": "10.0.0.1", 00:21:59.143 "trsvcid": "39396" 00:21:59.143 }, 00:21:59.143 "auth": { 00:21:59.143 "state": "completed", 00:21:59.143 "digest": "sha256", 00:21:59.143 "dhgroup": "ffdhe6144" 00:21:59.143 } 00:21:59.143 } 00:21:59.143 ]' 00:21:59.143 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.404 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.665 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:21:59.665 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.237 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.498 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.498 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.498 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.758 00:22:00.758 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.758 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.758 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.019 { 00:22:01.019 "cntlid": 37, 00:22:01.019 "qid": 0, 00:22:01.019 "state": "enabled", 00:22:01.019 "thread": "nvmf_tgt_poll_group_000", 00:22:01.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:01.019 "listen_address": { 00:22:01.019 "trtype": "TCP", 00:22:01.019 "adrfam": "IPv4", 00:22:01.019 "traddr": "10.0.0.2", 00:22:01.019 "trsvcid": "4420" 00:22:01.019 }, 00:22:01.019 "peer_address": { 00:22:01.019 "trtype": "TCP", 00:22:01.019 "adrfam": "IPv4", 00:22:01.019 "traddr": "10.0.0.1", 00:22:01.019 "trsvcid": "39426" 00:22:01.019 }, 00:22:01.019 "auth": { 00:22:01.019 "state": "completed", 00:22:01.019 "digest": "sha256", 00:22:01.019 "dhgroup": "ffdhe6144" 00:22:01.019 } 00:22:01.019 } 00:22:01.019 ]' 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:01.019 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.281 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:01.281 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.852 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.113 14:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.113 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.373 00:22:02.373 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.373 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.373 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.634 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.634 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.634 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.634 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.634 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.634 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.634 { 00:22:02.635 "cntlid": 39, 00:22:02.635 "qid": 0, 00:22:02.635 "state": "enabled", 00:22:02.635 "thread": "nvmf_tgt_poll_group_000", 00:22:02.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:02.635 "listen_address": { 00:22:02.635 "trtype": "TCP", 00:22:02.635 "adrfam": "IPv4", 00:22:02.635 "traddr": "10.0.0.2", 00:22:02.635 "trsvcid": "4420" 00:22:02.635 }, 00:22:02.635 "peer_address": { 00:22:02.635 "trtype": "TCP", 00:22:02.635 "adrfam": "IPv4", 00:22:02.635 "traddr": "10.0.0.1", 00:22:02.635 "trsvcid": "39448" 00:22:02.635 }, 00:22:02.635 "auth": { 00:22:02.635 "state": "completed", 00:22:02.635 "digest": "sha256", 00:22:02.635 "dhgroup": "ffdhe6144" 00:22:02.635 } 00:22:02.635 } 00:22:02.635 ]' 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.635 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.895 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:02.895 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.468 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
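Up to this point every pass follows the same cycle, repeated once per key id: pin the host's DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, register the host NQN on the subsystem with its key pair, attach a controller through the host-side RPC server, confirm via nvmf_subsystem_get_qpairs that the qpair reached auth.state "completed" with the expected digest and dhgroup, detach, then replay the same secrets through the kernel initiator with nvme connect/disconnect and drop the host again. A minimal standalone sketch of one pass follows; the 10.0.0.2:4420 listener, the /var/tmp/host.sock RPC socket, the NQNs, and the key names (key0/ckey0, loaded into the keyring earlier in the test) are taken from this run, while issuing the target-side RPCs through rpc.py on its default socket is an assumption standing in for the suite's rpc_cmd helper.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Restrict the SPDK host stack to a single digest/dhgroup combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Allow the host on the subsystem; ckey0 makes authentication bidirectional.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach, then verify the authenticated qpair the same way the suite does.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # completed

  # Tear down, then exercise the kernel path with the raw DHHC-1 secrets.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
      --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Here $key0_secret and $ckey0_secret are hypothetical variable names standing in for the DHHC-1:xx:...: strings printed in the trace, which the suite passes as literals.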
00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.728 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.729 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.403 00:22:04.403 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.403 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.403 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.403 { 00:22:04.403 "cntlid": 41, 00:22:04.403 "qid": 0, 00:22:04.403 "state": "enabled", 00:22:04.403 "thread": "nvmf_tgt_poll_group_000", 00:22:04.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.403 "listen_address": { 00:22:04.403 "trtype": "TCP", 00:22:04.403 "adrfam": "IPv4", 00:22:04.403 "traddr": "10.0.0.2", 00:22:04.403 "trsvcid": "4420" 00:22:04.403 }, 00:22:04.403 "peer_address": { 00:22:04.403 "trtype": "TCP", 00:22:04.403 "adrfam": "IPv4", 00:22:04.403 "traddr": "10.0.0.1", 00:22:04.403 "trsvcid": "39484" 00:22:04.403 }, 00:22:04.403 "auth": { 00:22:04.403 "state": "completed", 00:22:04.403 "digest": "sha256", 00:22:04.403 "dhgroup": "ffdhe8192" 00:22:04.403 } 00:22:04.403 } 00:22:04.403 ]' 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.403 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.712 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.712 14:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.712 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.712 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.712 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.712 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:04.712 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:05.283 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.543 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.543 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.543 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.543 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.543 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.543 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.543 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.114 00:22:06.114 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.114 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.114 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.375 { 00:22:06.375 "cntlid": 43, 00:22:06.375 "qid": 0, 00:22:06.375 "state": "enabled", 00:22:06.375 "thread": "nvmf_tgt_poll_group_000", 00:22:06.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:06.375 "listen_address": { 00:22:06.375 "trtype": "TCP", 00:22:06.375 "adrfam": "IPv4", 00:22:06.375 "traddr": "10.0.0.2", 00:22:06.375 "trsvcid": "4420" 00:22:06.375 }, 00:22:06.375 "peer_address": { 00:22:06.375 "trtype": "TCP", 00:22:06.375 "adrfam": "IPv4", 00:22:06.375 "traddr": "10.0.0.1", 00:22:06.375 "trsvcid": "39510" 00:22:06.375 }, 00:22:06.375 "auth": { 00:22:06.375 "state": "completed", 00:22:06.375 "digest": "sha256", 00:22:06.375 "dhgroup": "ffdhe8192" 00:22:06.375 } 00:22:06.375 } 00:22:06.375 ]' 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.375 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.636 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:06.636 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.208 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.468 14:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.468 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.040 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.040 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.040 { 00:22:08.040 "cntlid": 45, 00:22:08.040 "qid": 0, 00:22:08.040 "state": "enabled", 00:22:08.040 "thread": "nvmf_tgt_poll_group_000", 00:22:08.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:08.040 "listen_address": { 00:22:08.040 "trtype": "TCP", 00:22:08.040 "adrfam": "IPv4", 00:22:08.040 "traddr": "10.0.0.2", 00:22:08.040 "trsvcid": "4420" 00:22:08.040 }, 00:22:08.040 "peer_address": { 00:22:08.040 "trtype": "TCP", 00:22:08.040 "adrfam": "IPv4", 00:22:08.040 "traddr": "10.0.0.1", 00:22:08.040 "trsvcid": "39068" 00:22:08.040 }, 00:22:08.040 "auth": { 00:22:08.040 "state": "completed", 00:22:08.040 "digest": "sha256", 00:22:08.040 "dhgroup": "ffdhe8192" 00:22:08.040 } 00:22:08.040 } 00:22:08.041 ]' 00:22:08.041 
14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.301 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.562 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:08.562 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.134 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.394 14:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.394 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.966 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.966 { 00:22:09.966 "cntlid": 47, 00:22:09.966 "qid": 0, 00:22:09.966 "state": "enabled", 00:22:09.966 "thread": "nvmf_tgt_poll_group_000", 00:22:09.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.966 "listen_address": { 00:22:09.966 "trtype": "TCP", 00:22:09.966 "adrfam": "IPv4", 00:22:09.966 "traddr": "10.0.0.2", 00:22:09.966 "trsvcid": "4420" 00:22:09.966 }, 00:22:09.966 "peer_address": { 00:22:09.966 "trtype": "TCP", 00:22:09.966 "adrfam": "IPv4", 00:22:09.966 "traddr": "10.0.0.1", 00:22:09.966 "trsvcid": "39086" 00:22:09.966 }, 00:22:09.966 "auth": { 00:22:09.966 "state": "completed", 00:22:09.966 
"digest": "sha256", 00:22:09.966 "dhgroup": "ffdhe8192" 00:22:09.966 } 00:22:09.966 } 00:22:09.966 ]' 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.966 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.226 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.226 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.226 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.226 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:10.226 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:10.796 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:11.057 14:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.057 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.318 00:22:11.318 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.318 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.318 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.579 { 00:22:11.579 "cntlid": 49, 00:22:11.579 "qid": 0, 00:22:11.579 "state": "enabled", 00:22:11.579 "thread": "nvmf_tgt_poll_group_000", 00:22:11.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.579 "listen_address": { 00:22:11.579 "trtype": "TCP", 00:22:11.579 "adrfam": "IPv4", 
00:22:11.579 "traddr": "10.0.0.2", 00:22:11.579 "trsvcid": "4420" 00:22:11.579 }, 00:22:11.579 "peer_address": { 00:22:11.579 "trtype": "TCP", 00:22:11.579 "adrfam": "IPv4", 00:22:11.579 "traddr": "10.0.0.1", 00:22:11.579 "trsvcid": "39116" 00:22:11.579 }, 00:22:11.579 "auth": { 00:22:11.579 "state": "completed", 00:22:11.579 "digest": "sha384", 00:22:11.579 "dhgroup": "null" 00:22:11.579 } 00:22:11.579 } 00:22:11.579 ]' 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.579 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.840 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:11.840 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:12.411 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.673 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.933 00:22:12.933 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.933 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.933 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.195 { 00:22:13.195 "cntlid": 51, 00:22:13.195 "qid": 0, 00:22:13.195 "state": "enabled", 
00:22:13.195 "thread": "nvmf_tgt_poll_group_000", 00:22:13.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:13.195 "listen_address": { 00:22:13.195 "trtype": "TCP", 00:22:13.195 "adrfam": "IPv4", 00:22:13.195 "traddr": "10.0.0.2", 00:22:13.195 "trsvcid": "4420" 00:22:13.195 }, 00:22:13.195 "peer_address": { 00:22:13.195 "trtype": "TCP", 00:22:13.195 "adrfam": "IPv4", 00:22:13.195 "traddr": "10.0.0.1", 00:22:13.195 "trsvcid": "39138" 00:22:13.195 }, 00:22:13.195 "auth": { 00:22:13.195 "state": "completed", 00:22:13.195 "digest": "sha384", 00:22:13.195 "dhgroup": "null" 00:22:13.195 } 00:22:13.195 } 00:22:13.195 ]' 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.195 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:13.196 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.196 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.196 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.196 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.455 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:13.455 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:22:14.025 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.285 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.544 00:22:14.544 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.544 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.544 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.804 14:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.804 { 00:22:14.804 "cntlid": 53, 00:22:14.804 "qid": 0, 00:22:14.804 "state": "enabled", 00:22:14.804 "thread": "nvmf_tgt_poll_group_000", 00:22:14.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:14.804 "listen_address": { 00:22:14.804 "trtype": "TCP", 00:22:14.804 "adrfam": "IPv4", 00:22:14.804 "traddr": "10.0.0.2", 00:22:14.804 "trsvcid": "4420" 00:22:14.804 }, 00:22:14.804 "peer_address": { 00:22:14.804 "trtype": "TCP", 00:22:14.804 "adrfam": "IPv4", 00:22:14.804 "traddr": "10.0.0.1", 00:22:14.804 "trsvcid": "39176" 00:22:14.804 }, 00:22:14.804 "auth": { 00:22:14.804 "state": "completed", 00:22:14.804 "digest": "sha384", 00:22:14.804 "dhgroup": "null" 00:22:14.804 } 00:22:14.804 } 00:22:14.804 ]' 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.804 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.064 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:15.064 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:15.634 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.894 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.895 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.895 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.154 00:22:16.154 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.154 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.154 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.414 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.414 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.414 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.414 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.414 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.414 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.414 { 00:22:16.414 "cntlid": 55, 00:22:16.414 "qid": 0, 00:22:16.414 "state": "enabled", 00:22:16.414 "thread": "nvmf_tgt_poll_group_000", 00:22:16.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:16.414 "listen_address": { 00:22:16.414 "trtype": "TCP", 00:22:16.414 "adrfam": "IPv4", 00:22:16.414 "traddr": "10.0.0.2", 00:22:16.414 "trsvcid": "4420" 00:22:16.414 }, 00:22:16.414 "peer_address": { 00:22:16.414 "trtype": "TCP", 00:22:16.414 "adrfam": "IPv4", 00:22:16.414 "traddr": "10.0.0.1", 00:22:16.414 "trsvcid": "39198" 00:22:16.414 }, 00:22:16.414 "auth": { 00:22:16.414 "state": "completed", 00:22:16.414 "digest": "sha384", 00:22:16.414 "dhgroup": "null" 00:22:16.414 } 00:22:16.414 } 00:22:16.414 ]' 00:22:16.415 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.415 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.415 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.415 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:16.415 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.415 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.415 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.415 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.675 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:16.675 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.246 14:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.246 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.507 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.768 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:17.768 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.028 { 00:22:18.028 "cntlid": 57, 00:22:18.028 "qid": 0, 00:22:18.028 "state": "enabled", 00:22:18.028 "thread": "nvmf_tgt_poll_group_000", 00:22:18.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.028 "listen_address": { 00:22:18.028 "trtype": "TCP", 00:22:18.028 "adrfam": "IPv4", 00:22:18.028 "traddr": "10.0.0.2", 00:22:18.028 "trsvcid": "4420" 00:22:18.028 }, 00:22:18.028 "peer_address": { 00:22:18.028 "trtype": "TCP", 00:22:18.028 "adrfam": "IPv4", 00:22:18.028 "traddr": "10.0.0.1", 00:22:18.028 "trsvcid": "45428" 00:22:18.028 }, 00:22:18.028 "auth": { 00:22:18.028 "state": "completed", 00:22:18.028 "digest": "sha384", 00:22:18.028 "dhgroup": "ffdhe2048" 00:22:18.028 } 00:22:18.028 } 00:22:18.028 ]' 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.028 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.029 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.290 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:18.290 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:18.863 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.123 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.383 00:22:19.383 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.383 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.383 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.383 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.383 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.383 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.383 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.643 { 00:22:19.643 "cntlid": 59, 00:22:19.643 "qid": 0, 00:22:19.643 "state": "enabled", 00:22:19.643 "thread": "nvmf_tgt_poll_group_000", 00:22:19.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:19.643 "listen_address": { 00:22:19.643 "trtype": "TCP", 00:22:19.643 "adrfam": "IPv4", 00:22:19.643 "traddr": "10.0.0.2", 00:22:19.643 "trsvcid": "4420" 00:22:19.643 }, 00:22:19.643 "peer_address": { 00:22:19.643 "trtype": "TCP", 00:22:19.643 "adrfam": "IPv4", 00:22:19.643 "traddr": "10.0.0.1", 00:22:19.643 "trsvcid": "45462" 00:22:19.643 }, 00:22:19.643 "auth": { 00:22:19.643 "state": "completed", 00:22:19.643 "digest": "sha384", 00:22:19.643 "dhgroup": "ffdhe2048" 00:22:19.643 } 00:22:19.643 } 00:22:19.643 ]' 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.643 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.903 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:19.903 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:20.473 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.734 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.995 00:22:20.995 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.995 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.995 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.255 { 00:22:21.255 "cntlid": 61, 00:22:21.255 "qid": 0, 00:22:21.255 "state": "enabled", 00:22:21.255 "thread": "nvmf_tgt_poll_group_000", 00:22:21.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:21.255 "listen_address": { 00:22:21.255 "trtype": "TCP", 00:22:21.255 "adrfam": "IPv4", 00:22:21.255 "traddr": "10.0.0.2", 00:22:21.255 "trsvcid": "4420" 00:22:21.255 }, 00:22:21.255 "peer_address": { 00:22:21.255 "trtype": "TCP", 00:22:21.255 "adrfam": "IPv4", 00:22:21.255 "traddr": "10.0.0.1", 00:22:21.255 "trsvcid": "45502" 00:22:21.255 }, 00:22:21.255 "auth": { 00:22:21.255 "state": "completed", 00:22:21.255 "digest": "sha384", 00:22:21.255 "dhgroup": "ffdhe2048" 00:22:21.255 } 00:22:21.255 } 00:22:21.255 ]' 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.255 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.515 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:21.515 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.087 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.348 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:22.609 00:22:22.609 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.609 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.609 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.869 { 00:22:22.869 "cntlid": 63, 00:22:22.869 "qid": 0, 00:22:22.869 "state": "enabled", 00:22:22.869 "thread": "nvmf_tgt_poll_group_000", 00:22:22.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:22.869 "listen_address": { 00:22:22.869 "trtype": "TCP", 00:22:22.869 "adrfam": "IPv4", 00:22:22.869 "traddr": "10.0.0.2", 00:22:22.869 "trsvcid": "4420" 00:22:22.869 }, 00:22:22.869 "peer_address": { 00:22:22.869 "trtype": "TCP", 00:22:22.869 "adrfam": "IPv4", 00:22:22.869 "traddr": "10.0.0.1", 00:22:22.869 "trsvcid": "45510" 00:22:22.869 }, 00:22:22.869 "auth": { 00:22:22.869 "state": "completed", 00:22:22.869 "digest": "sha384", 00:22:22.869 "dhgroup": "ffdhe2048" 00:22:22.869 } 00:22:22.869 } 00:22:22.869 ]' 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.869 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.130 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:23.130 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:23.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.701 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.961 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.222 
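Each attach in this section is followed by the same three assertions before the controller is detached; condensed from the jq checks in this trace (reusing $target_rpc from the sketch earlier, with ffdhe3072 being the dhgroup under test at this point of the loop):

qpairs=$($target_rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]  # digest negotiated as requested
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # dhgroup for this pass of the loop
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-HMAC-CHAP handshake finished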
00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.222 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.482 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.482 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.482 { 00:22:24.482 "cntlid": 65, 00:22:24.482 "qid": 0, 00:22:24.482 "state": "enabled", 00:22:24.482 "thread": "nvmf_tgt_poll_group_000", 00:22:24.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:24.482 "listen_address": { 00:22:24.482 "trtype": "TCP", 00:22:24.482 "adrfam": "IPv4", 00:22:24.482 "traddr": "10.0.0.2", 00:22:24.482 "trsvcid": "4420" 00:22:24.482 }, 00:22:24.482 "peer_address": { 00:22:24.482 "trtype": "TCP", 00:22:24.482 "adrfam": "IPv4", 00:22:24.482 "traddr": "10.0.0.1", 00:22:24.482 "trsvcid": "45530" 00:22:24.482 }, 00:22:24.482 "auth": { 00:22:24.482 "state": "completed", 00:22:24.482 "digest": "sha384", 00:22:24.482 "dhgroup": "ffdhe3072" 00:22:24.482 } 00:22:24.482 } 00:22:24.482 ]' 00:22:24.482 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.482 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.482 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.482 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:24.482 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.482 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.482 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.482 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.742 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:24.742 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.313 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.574 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.836 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.836 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.836 { 00:22:25.836 "cntlid": 67, 00:22:25.836 "qid": 0, 00:22:25.836 "state": "enabled", 00:22:25.836 "thread": "nvmf_tgt_poll_group_000", 00:22:25.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:25.836 "listen_address": { 00:22:25.836 "trtype": "TCP", 00:22:25.836 "adrfam": "IPv4", 00:22:25.836 "traddr": "10.0.0.2", 00:22:25.836 "trsvcid": "4420" 00:22:25.836 }, 00:22:25.836 "peer_address": { 00:22:25.836 "trtype": "TCP", 00:22:25.836 "adrfam": "IPv4", 00:22:25.836 "traddr": "10.0.0.1", 00:22:25.836 "trsvcid": "45562" 00:22:25.836 }, 00:22:25.836 "auth": { 00:22:25.836 "state": "completed", 00:22:25.836 "digest": "sha384", 00:22:25.836 "dhgroup": "ffdhe3072" 00:22:25.836 } 00:22:25.836 } 00:22:25.836 ]' 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.097 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.358 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret 
DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:26.358 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:26.929 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:26.930 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.191 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.452 00:22:27.452 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.452 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.452 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.452 { 00:22:27.452 "cntlid": 69, 00:22:27.452 "qid": 0, 00:22:27.452 "state": "enabled", 00:22:27.452 "thread": "nvmf_tgt_poll_group_000", 00:22:27.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:27.452 "listen_address": { 00:22:27.452 "trtype": "TCP", 00:22:27.452 "adrfam": "IPv4", 00:22:27.452 "traddr": "10.0.0.2", 00:22:27.452 "trsvcid": "4420" 00:22:27.452 }, 00:22:27.452 "peer_address": { 00:22:27.452 "trtype": "TCP", 00:22:27.452 "adrfam": "IPv4", 00:22:27.452 "traddr": "10.0.0.1", 00:22:27.452 "trsvcid": "52316" 00:22:27.452 }, 00:22:27.452 "auth": { 00:22:27.452 "state": "completed", 00:22:27.452 "digest": "sha384", 00:22:27.452 "dhgroup": "ffdhe3072" 00:22:27.452 } 00:22:27.452 } 00:22:27.452 ]' 00:22:27.452 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.714 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:27.975 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:27.975 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.546 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
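Each iteration in this transcript follows the same four-step pattern: restrict the host side to one digest and DH group, grant the host NQN a key pair on the target, attach a controller with that same key pair, then tear it down. A minimal sketch of one iteration (the sha384/ffdhe3072 pass with key2 just completed above), with rpc.py's full /var/jenkins/... path shortened, $hostnqn standing in for the nqn.2014-08.org.nvmexpress:uuid:00539ede-... host NQN, and rpc_cmd denoting the test's target-side RPC wrapper:

    # host side: allow only this digest/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: grant the host NQN the key pair (ckey enables mutual auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attach a controller, authenticating with the same key pair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # tear down before the next key/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0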
00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.807 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.067 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.067 { 00:22:29.067 "cntlid": 71, 00:22:29.067 "qid": 0, 00:22:29.067 "state": "enabled", 00:22:29.067 "thread": "nvmf_tgt_poll_group_000", 00:22:29.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:29.067 "listen_address": { 00:22:29.067 "trtype": "TCP", 00:22:29.067 "adrfam": "IPv4", 00:22:29.067 "traddr": "10.0.0.2", 00:22:29.067 "trsvcid": "4420" 00:22:29.067 }, 00:22:29.067 "peer_address": { 00:22:29.067 "trtype": "TCP", 00:22:29.067 "adrfam": "IPv4", 00:22:29.067 "traddr": "10.0.0.1", 00:22:29.067 "trsvcid": "52356" 00:22:29.067 }, 00:22:29.067 "auth": { 00:22:29.067 "state": "completed", 00:22:29.067 "digest": "sha384", 00:22:29.067 "dhgroup": "ffdhe3072" 00:22:29.067 } 00:22:29.067 } 00:22:29.067 ]' 00:22:29.067 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.327 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.588 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:29.588 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:30.159 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
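Between attach and detach, each iteration also verifies that the qpair actually finished DH-HMAC-CHAP rather than merely connecting. The assertions below condense the bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs checks that recur throughout the transcript (here for the sha384/ffdhe4096 pass); the jq paths are the ones the log uses, while the $qpairs variable and here-string form are shorthand introduced for this sketch:

    # the controller must exist under the expected name
    [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]
    # the target's view of the qpair must show completed authentication
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]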
00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.420 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.680 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.680 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.942 { 00:22:30.942 "cntlid": 73, 00:22:30.942 "qid": 0, 00:22:30.942 "state": "enabled", 00:22:30.942 "thread": "nvmf_tgt_poll_group_000", 00:22:30.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:30.942 "listen_address": { 00:22:30.942 "trtype": "TCP", 00:22:30.942 "adrfam": "IPv4", 00:22:30.942 "traddr": "10.0.0.2", 00:22:30.942 "trsvcid": "4420" 00:22:30.942 }, 00:22:30.942 "peer_address": { 00:22:30.942 "trtype": "TCP", 00:22:30.942 "adrfam": "IPv4", 00:22:30.942 "traddr": "10.0.0.1", 00:22:30.942 "trsvcid": "52378" 00:22:30.942 }, 00:22:30.942 "auth": { 00:22:30.942 "state": "completed", 00:22:30.942 "digest": "sha384", 00:22:30.942 "dhgroup": "ffdhe4096" 00:22:30.942 } 00:22:30.942 } 00:22:30.942 ]' 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.942 
14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.942 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.202 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:31.202 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:31.774 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.034 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:32.034 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.034 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.034 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:32.034 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:32.034 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.035 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.295 00:22:32.295 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.295 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.295 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.556 { 00:22:32.556 "cntlid": 75, 00:22:32.556 "qid": 0, 00:22:32.556 "state": "enabled", 00:22:32.556 "thread": "nvmf_tgt_poll_group_000", 00:22:32.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:32.556 "listen_address": { 00:22:32.556 "trtype": "TCP", 00:22:32.556 "adrfam": "IPv4", 00:22:32.556 "traddr": "10.0.0.2", 00:22:32.556 "trsvcid": "4420" 00:22:32.556 }, 00:22:32.556 "peer_address": { 00:22:32.556 "trtype": "TCP", 00:22:32.556 "adrfam": "IPv4", 00:22:32.556 "traddr": "10.0.0.1", 00:22:32.556 "trsvcid": "52408" 00:22:32.556 }, 00:22:32.556 "auth": { 00:22:32.556 "state": "completed", 00:22:32.556 "digest": "sha384", 00:22:32.556 "dhgroup": "ffdhe4096" 00:22:32.556 } 00:22:32.556 } 00:22:32.556 ]' 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.556 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.817 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:32.817 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.387 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.648 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.909 00:22:33.909 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.909 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.909 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.169 { 00:22:34.169 "cntlid": 77, 00:22:34.169 "qid": 0, 00:22:34.169 "state": "enabled", 00:22:34.169 "thread": "nvmf_tgt_poll_group_000", 00:22:34.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:34.169 "listen_address": { 00:22:34.169 "trtype": "TCP", 00:22:34.169 "adrfam": "IPv4", 00:22:34.169 "traddr": "10.0.0.2", 00:22:34.169 "trsvcid": "4420" 00:22:34.169 }, 00:22:34.169 "peer_address": { 00:22:34.169 "trtype": "TCP", 00:22:34.169 "adrfam": "IPv4", 00:22:34.169 "traddr": "10.0.0.1", 00:22:34.169 "trsvcid": "52434" 00:22:34.169 }, 00:22:34.169 "auth": { 00:22:34.169 "state": "completed", 00:22:34.169 "digest": "sha384", 00:22:34.169 "dhgroup": "ffdhe4096" 00:22:34.169 } 00:22:34.169 } 00:22:34.169 ]' 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.169 14:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.169 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.430 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:34.430 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.004 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.265 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.525 00:22:35.525 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.525 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.525 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.786 { 00:22:35.786 "cntlid": 79, 00:22:35.786 "qid": 0, 00:22:35.786 "state": "enabled", 00:22:35.786 "thread": "nvmf_tgt_poll_group_000", 00:22:35.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:35.786 "listen_address": { 00:22:35.786 "trtype": "TCP", 00:22:35.786 "adrfam": "IPv4", 00:22:35.786 "traddr": "10.0.0.2", 00:22:35.786 "trsvcid": "4420" 00:22:35.786 }, 00:22:35.786 "peer_address": { 00:22:35.786 "trtype": "TCP", 00:22:35.786 "adrfam": "IPv4", 00:22:35.786 "traddr": "10.0.0.1", 00:22:35.786 "trsvcid": "52470" 00:22:35.786 }, 00:22:35.786 "auth": { 00:22:35.786 "state": "completed", 00:22:35.786 "digest": "sha384", 00:22:35.786 "dhgroup": "ffdhe4096" 00:22:35.786 } 00:22:35.786 } 00:22:35.786 ]' 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.786 14:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.786 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.047 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:36.047 14:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:36.618 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.618 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.618 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.618 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.618 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.619 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.619 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.619 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.619 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:36.879 14:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.879 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.139 00:22:37.139 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.139 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.139 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.399 { 00:22:37.399 "cntlid": 81, 00:22:37.399 "qid": 0, 00:22:37.399 "state": "enabled", 00:22:37.399 "thread": "nvmf_tgt_poll_group_000", 00:22:37.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:37.399 "listen_address": { 00:22:37.399 "trtype": "TCP", 00:22:37.399 "adrfam": "IPv4", 00:22:37.399 "traddr": "10.0.0.2", 00:22:37.399 "trsvcid": "4420" 00:22:37.399 }, 00:22:37.399 "peer_address": { 00:22:37.399 "trtype": "TCP", 00:22:37.399 "adrfam": "IPv4", 00:22:37.399 "traddr": "10.0.0.1", 00:22:37.399 "trsvcid": "52504" 00:22:37.399 }, 00:22:37.399 "auth": { 00:22:37.399 "state": "completed", 00:22:37.399 "digest": 
"sha384", 00:22:37.399 "dhgroup": "ffdhe6144" 00:22:37.399 } 00:22:37.399 } 00:22:37.399 ]' 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.399 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.659 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:37.659 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.659 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.659 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.659 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.919 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:37.919 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.490 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.750 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.011 00:22:39.011 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.011 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.011 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.271 { 00:22:39.271 "cntlid": 83, 00:22:39.271 "qid": 0, 00:22:39.271 "state": "enabled", 00:22:39.271 "thread": "nvmf_tgt_poll_group_000", 00:22:39.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:39.271 "listen_address": { 00:22:39.271 "trtype": "TCP", 00:22:39.271 "adrfam": "IPv4", 00:22:39.271 "traddr": "10.0.0.2", 00:22:39.271 
"trsvcid": "4420" 00:22:39.271 }, 00:22:39.271 "peer_address": { 00:22:39.271 "trtype": "TCP", 00:22:39.271 "adrfam": "IPv4", 00:22:39.271 "traddr": "10.0.0.1", 00:22:39.271 "trsvcid": "54330" 00:22:39.271 }, 00:22:39.271 "auth": { 00:22:39.271 "state": "completed", 00:22:39.271 "digest": "sha384", 00:22:39.271 "dhgroup": "ffdhe6144" 00:22:39.271 } 00:22:39.271 } 00:22:39.271 ]' 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.271 14:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.531 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:39.532 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:40.103 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.103 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.103 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.104 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.104 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.104 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.104 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:40.104 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:40.364 
14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.364 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.625 00:22:40.625 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.625 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.625 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.885 { 00:22:40.885 "cntlid": 85, 00:22:40.885 "qid": 0, 00:22:40.885 "state": "enabled", 00:22:40.885 "thread": "nvmf_tgt_poll_group_000", 00:22:40.885 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.885 "listen_address": { 00:22:40.885 "trtype": "TCP", 00:22:40.885 "adrfam": "IPv4", 00:22:40.885 "traddr": "10.0.0.2", 00:22:40.885 "trsvcid": "4420" 00:22:40.885 }, 00:22:40.885 "peer_address": { 00:22:40.885 "trtype": "TCP", 00:22:40.885 "adrfam": "IPv4", 00:22:40.885 "traddr": "10.0.0.1", 00:22:40.885 "trsvcid": "54354" 00:22:40.885 }, 00:22:40.885 "auth": { 00:22:40.885 "state": "completed", 00:22:40.885 "digest": "sha384", 00:22:40.885 "dhgroup": "ffdhe6144" 00:22:40.885 } 00:22:40.885 } 00:22:40.885 ]' 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.885 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:41.146 14:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:42.088 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.088 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:42.088 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.088 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.088 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.088 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:42.089 14:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.089 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.384 00:22:42.384 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.384 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.384 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.696 { 00:22:42.696 "cntlid": 87, 
00:22:42.696 "qid": 0, 00:22:42.696 "state": "enabled", 00:22:42.696 "thread": "nvmf_tgt_poll_group_000", 00:22:42.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:42.696 "listen_address": { 00:22:42.696 "trtype": "TCP", 00:22:42.696 "adrfam": "IPv4", 00:22:42.696 "traddr": "10.0.0.2", 00:22:42.696 "trsvcid": "4420" 00:22:42.696 }, 00:22:42.696 "peer_address": { 00:22:42.696 "trtype": "TCP", 00:22:42.696 "adrfam": "IPv4", 00:22:42.696 "traddr": "10.0.0.1", 00:22:42.696 "trsvcid": "54386" 00:22:42.696 }, 00:22:42.696 "auth": { 00:22:42.696 "state": "completed", 00:22:42.696 "digest": "sha384", 00:22:42.696 "dhgroup": "ffdhe6144" 00:22:42.696 } 00:22:42.696 } 00:22:42.696 ]' 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.696 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.697 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.697 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.697 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.697 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.957 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:42.957 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:43.528 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.528 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.528 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.528 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.528 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.528 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.529 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.529 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:43.529 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.788 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.359 00:22:44.359 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.359 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.359 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.359 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.359 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.359 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.359 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.619 { 00:22:44.619 "cntlid": 89, 00:22:44.619 "qid": 0, 00:22:44.619 "state": "enabled", 00:22:44.619 "thread": "nvmf_tgt_poll_group_000", 00:22:44.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.619 "listen_address": { 00:22:44.619 "trtype": "TCP", 00:22:44.619 "adrfam": "IPv4", 00:22:44.619 "traddr": "10.0.0.2", 00:22:44.619 "trsvcid": "4420" 00:22:44.619 }, 00:22:44.619 "peer_address": { 00:22:44.619 "trtype": "TCP", 00:22:44.619 "adrfam": "IPv4", 00:22:44.619 "traddr": "10.0.0.1", 00:22:44.619 "trsvcid": "54420" 00:22:44.619 }, 00:22:44.619 "auth": { 00:22:44.619 "state": "completed", 00:22:44.619 "digest": "sha384", 00:22:44.619 "dhgroup": "ffdhe8192" 00:22:44.619 } 00:22:44.619 } 00:22:44.619 ]' 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.619 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.880 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:44.880 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.449 14:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.449 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.708 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.278 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.278 { 00:22:46.278 "cntlid": 91, 00:22:46.278 "qid": 0, 00:22:46.278 "state": "enabled", 00:22:46.278 "thread": "nvmf_tgt_poll_group_000", 00:22:46.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:46.278 "listen_address": { 00:22:46.278 "trtype": "TCP", 00:22:46.278 "adrfam": "IPv4", 00:22:46.278 "traddr": "10.0.0.2", 00:22:46.278 "trsvcid": "4420" 00:22:46.278 }, 00:22:46.278 "peer_address": { 00:22:46.278 "trtype": "TCP", 00:22:46.278 "adrfam": "IPv4", 00:22:46.278 "traddr": "10.0.0.1", 00:22:46.278 "trsvcid": "54450" 00:22:46.278 }, 00:22:46.278 "auth": { 00:22:46.278 "state": "completed", 00:22:46.278 "digest": "sha384", 00:22:46.278 "dhgroup": "ffdhe8192" 00:22:46.278 } 00:22:46.278 } 00:22:46.278 ]' 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.278 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.539 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:46.539 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.480 14:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.480 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.480 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.049 00:22:48.049 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.049 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.050 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.310 14:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.310 { 00:22:48.310 "cntlid": 93, 00:22:48.310 "qid": 0, 00:22:48.310 "state": "enabled", 00:22:48.310 "thread": "nvmf_tgt_poll_group_000", 00:22:48.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.310 "listen_address": { 00:22:48.310 "trtype": "TCP", 00:22:48.310 "adrfam": "IPv4", 00:22:48.310 "traddr": "10.0.0.2", 00:22:48.310 "trsvcid": "4420" 00:22:48.310 }, 00:22:48.310 "peer_address": { 00:22:48.310 "trtype": "TCP", 00:22:48.310 "adrfam": "IPv4", 00:22:48.310 "traddr": "10.0.0.1", 00:22:48.310 "trsvcid": "39922" 00:22:48.310 }, 00:22:48.310 "auth": { 00:22:48.310 "state": "completed", 00:22:48.310 "digest": "sha384", 00:22:48.310 "dhgroup": "ffdhe8192" 00:22:48.310 } 00:22:48.310 } 00:22:48.310 ]' 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.310 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.570 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:48.570 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.140 14:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:49.140 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.400 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.971 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.971 { 00:22:49.971 "cntlid": 95, 00:22:49.971 "qid": 0, 00:22:49.971 "state": "enabled", 00:22:49.971 "thread": "nvmf_tgt_poll_group_000", 00:22:49.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:49.971 "listen_address": { 00:22:49.971 "trtype": "TCP", 00:22:49.971 "adrfam": "IPv4", 00:22:49.971 "traddr": "10.0.0.2", 00:22:49.971 "trsvcid": "4420" 00:22:49.971 }, 00:22:49.971 "peer_address": { 00:22:49.971 "trtype": "TCP", 00:22:49.971 "adrfam": "IPv4", 00:22:49.971 "traddr": "10.0.0.1", 00:22:49.971 "trsvcid": "39948" 00:22:49.971 }, 00:22:49.971 "auth": { 00:22:49.971 "state": "completed", 00:22:49.971 "digest": "sha384", 00:22:49.971 "dhgroup": "ffdhe8192" 00:22:49.971 } 00:22:49.971 } 00:22:49.971 ]' 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.971 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.231 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:50.231 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.231 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.231 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.231 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.491 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:50.491 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.061 14:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:51.061 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.322 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.322 00:22:51.582 
14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.582 { 00:22:51.582 "cntlid": 97, 00:22:51.582 "qid": 0, 00:22:51.582 "state": "enabled", 00:22:51.582 "thread": "nvmf_tgt_poll_group_000", 00:22:51.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:51.582 "listen_address": { 00:22:51.582 "trtype": "TCP", 00:22:51.582 "adrfam": "IPv4", 00:22:51.582 "traddr": "10.0.0.2", 00:22:51.582 "trsvcid": "4420" 00:22:51.582 }, 00:22:51.582 "peer_address": { 00:22:51.582 "trtype": "TCP", 00:22:51.582 "adrfam": "IPv4", 00:22:51.582 "traddr": "10.0.0.1", 00:22:51.582 "trsvcid": "39968" 00:22:51.582 }, 00:22:51.582 "auth": { 00:22:51.582 "state": "completed", 00:22:51.582 "digest": "sha512", 00:22:51.582 "dhgroup": "null" 00:22:51.582 } 00:22:51.582 } 00:22:51.582 ]' 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.582 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.842 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:51.842 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.842 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.842 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.842 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.102 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:52.102 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.671 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.932 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.932 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.192 { 00:22:53.192 "cntlid": 99, 00:22:53.192 "qid": 0, 00:22:53.192 "state": "enabled", 00:22:53.192 "thread": "nvmf_tgt_poll_group_000", 00:22:53.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:53.192 "listen_address": { 00:22:53.192 "trtype": "TCP", 00:22:53.192 "adrfam": "IPv4", 00:22:53.192 "traddr": "10.0.0.2", 00:22:53.192 "trsvcid": "4420" 00:22:53.192 }, 00:22:53.192 "peer_address": { 00:22:53.192 "trtype": "TCP", 00:22:53.192 "adrfam": "IPv4", 00:22:53.192 "traddr": "10.0.0.1", 00:22:53.192 "trsvcid": "39994" 00:22:53.192 }, 00:22:53.192 "auth": { 00:22:53.192 "state": "completed", 00:22:53.192 "digest": "sha512", 00:22:53.192 "dhgroup": "null" 00:22:53.192 } 00:22:53.192 } 00:22:53.192 ]' 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.192 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.453 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:53.453 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.453 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.453 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.453 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.453 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:53.453 14:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
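
[editor's note] The trace above repeats one connect/authenticate cycle per (digest, dhgroup, keyid) combination. Below is a minimal sketch of that cycle distilled from the commands in the log: the NQNs, addresses, RPC script path, and the /var/tmp/host.sock socket are taken verbatim from the trace, while the rpc/hostrpc helper names and the DIGEST/DHGROUP/KEYID variables are illustrative stand-ins for what target/auth.sh sets up earlier in the run (the keyN/ckeyN key objects themselves are registered before this section and are not shown here). The kernel-initiator leg of each cycle is sketched separately at the end of this section.

#!/usr/bin/env bash
# Sketch of one connect_authenticate cycle from target/auth.sh (illustrative).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc()     { "$RPC" "$@"; }                         # target app, default RPC socket
hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }   # host app (bdev_nvme initiator)

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
DIGEST=sha512 DHGROUP=null KEYID=2                 # one combination from the sweep above

# 1. Restrict the host-side initiator to a single digest/dhgroup pair.
hostrpc bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# 2. Register the host on the subsystem with the DH-HMAC-CHAP key(s) under test
#    (key3 in the trace is added without a controller key; keyid 2 uses both).
rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# 3. Attach from the host app; this drives the authentication handshake.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

# 4. Verify the controller exists and the qpair finished authentication with
#    the expected parameters, exactly as the jq checks in the trace do.
[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$(rpc nvmf_subsystem_get_qpairs "$SUBNQN")
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed  ]]
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$DIGEST"  ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$DHGROUP" ]]

# 5. Detach so the kernel-initiator leg (nvme-cli) can run next.
hostrpc bdev_nvme_detach_controller nvme0
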
00:22:54.393 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.652 00:22:54.652 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.652 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.652 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.912 { 00:22:54.912 "cntlid": 101, 00:22:54.912 "qid": 0, 00:22:54.912 "state": "enabled", 00:22:54.912 "thread": "nvmf_tgt_poll_group_000", 00:22:54.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:54.912 "listen_address": { 00:22:54.912 "trtype": "TCP", 00:22:54.912 "adrfam": "IPv4", 00:22:54.912 "traddr": "10.0.0.2", 00:22:54.912 "trsvcid": "4420" 00:22:54.912 }, 00:22:54.912 "peer_address": { 00:22:54.912 "trtype": "TCP", 00:22:54.912 "adrfam": "IPv4", 00:22:54.912 "traddr": "10.0.0.1", 00:22:54.912 "trsvcid": "40028" 00:22:54.912 }, 00:22:54.912 "auth": { 00:22:54.912 "state": "completed", 00:22:54.912 "digest": "sha512", 00:22:54.912 "dhgroup": "null" 00:22:54.912 } 00:22:54.912 } 00:22:54.912 ]' 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.912 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.173 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:55.173 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.743 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:56.002 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.003 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.262 00:22:56.262 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.262 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.262 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.522 { 00:22:56.522 "cntlid": 103, 00:22:56.522 "qid": 0, 00:22:56.522 "state": "enabled", 00:22:56.522 "thread": "nvmf_tgt_poll_group_000", 00:22:56.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:56.522 "listen_address": { 00:22:56.522 "trtype": "TCP", 00:22:56.522 "adrfam": "IPv4", 00:22:56.522 "traddr": "10.0.0.2", 00:22:56.522 "trsvcid": "4420" 00:22:56.522 }, 00:22:56.522 "peer_address": { 00:22:56.522 "trtype": "TCP", 00:22:56.522 "adrfam": "IPv4", 00:22:56.522 "traddr": "10.0.0.1", 00:22:56.522 "trsvcid": "40066" 00:22:56.522 }, 00:22:56.522 "auth": { 00:22:56.522 "state": "completed", 00:22:56.522 "digest": "sha512", 00:22:56.522 "dhgroup": "null" 00:22:56.522 } 00:22:56.522 } 00:22:56.522 ]' 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.522 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.782 14:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:56.782 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.352 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
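At this point the trace has finished a full DH-HMAC-CHAP pass (sha512 digest, null DH group, keys 2 and 3) and has just armed the next one (ffdhe2048, key0); the lines that follow perform the attach. Distilled from the commands visible in the trace, one pass reduces to the sketch below. This is a condensed reading of the log, not the script itself: the key names key0/ckey0 refer to keyring entries registered earlier in the run, the secrets are shortened here (the trace carries the full DHHC-1 strings), and the RPC path, socket, and NQNs are the ones used throughout this log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Pin the SPDK host stack to one digest/dhgroup combination for this pass.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Register the host on the subsystem with the keys under test (target side).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach through the SPDK host stack, verify (see the jq checks traced
  # further on), then detach.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Repeat the handshake through the kernel host with the literal secrets,
  # then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"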
00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.612 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.871 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.871 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.131 { 00:22:58.131 "cntlid": 105, 00:22:58.131 "qid": 0, 00:22:58.131 "state": "enabled", 00:22:58.131 "thread": "nvmf_tgt_poll_group_000", 00:22:58.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:58.131 "listen_address": { 00:22:58.131 "trtype": "TCP", 00:22:58.131 "adrfam": "IPv4", 00:22:58.131 "traddr": "10.0.0.2", 00:22:58.131 "trsvcid": "4420" 00:22:58.131 }, 00:22:58.131 "peer_address": { 00:22:58.131 "trtype": "TCP", 00:22:58.131 "adrfam": "IPv4", 00:22:58.131 "traddr": "10.0.0.1", 00:22:58.131 "trsvcid": "39272" 00:22:58.131 }, 00:22:58.131 "auth": { 00:22:58.131 "state": "completed", 00:22:58.131 "digest": "sha512", 00:22:58.131 "dhgroup": "ffdhe2048" 00:22:58.131 } 00:22:58.131 } 00:22:58.131 ]' 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.131 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.131 14:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.391 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:58.391 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:22:58.959 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.959 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.960 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.960 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.960 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.960 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.960 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.960 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.220 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.479 00:22:59.479 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.479 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.479 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.739 { 00:22:59.739 "cntlid": 107, 00:22:59.739 "qid": 0, 00:22:59.739 "state": "enabled", 00:22:59.739 "thread": "nvmf_tgt_poll_group_000", 00:22:59.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:59.739 "listen_address": { 00:22:59.739 "trtype": "TCP", 00:22:59.739 "adrfam": "IPv4", 00:22:59.739 "traddr": "10.0.0.2", 00:22:59.739 "trsvcid": "4420" 00:22:59.739 }, 00:22:59.739 "peer_address": { 00:22:59.739 "trtype": "TCP", 00:22:59.739 "adrfam": "IPv4", 00:22:59.739 "traddr": "10.0.0.1", 00:22:59.739 "trsvcid": "39288" 00:22:59.739 }, 00:22:59.739 "auth": { 00:22:59.739 "state": "completed", 00:22:59.739 "digest": "sha512", 00:22:59.739 "dhgroup": "ffdhe2048" 00:22:59.739 } 00:22:59.739 } 00:22:59.739 ]' 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.739 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.000 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:00.000 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:00.570 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
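The '.[0].auth.state' read just above completes the trio of assertions each pass makes between attach and detach (the target/auth.sh@73 through @77 lines): the controller name is confirmed, then the subsystem's qpair dump is checked for the negotiated digest, DH group, and handshake state. Condensed, and assuming the same rpc/subnqn variables as the sketch earlier, those checks amount to:

  # Controller came up under the expected name (target/auth.sh@73).
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Auth parameters on the established qpair (target/auth.sh@74-77).
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]     # digest for this pass
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # group set via bdev_nvme_set_options
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished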
00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.831 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.091 00:23:01.091 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.091 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.091 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.351 { 00:23:01.351 "cntlid": 109, 00:23:01.351 "qid": 0, 00:23:01.351 "state": "enabled", 00:23:01.351 "thread": "nvmf_tgt_poll_group_000", 00:23:01.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:01.351 "listen_address": { 00:23:01.351 "trtype": "TCP", 00:23:01.351 "adrfam": "IPv4", 00:23:01.351 "traddr": "10.0.0.2", 00:23:01.351 "trsvcid": "4420" 00:23:01.351 }, 00:23:01.351 "peer_address": { 00:23:01.351 "trtype": "TCP", 00:23:01.351 "adrfam": "IPv4", 00:23:01.351 "traddr": "10.0.0.1", 00:23:01.351 "trsvcid": "39312" 00:23:01.351 }, 00:23:01.351 "auth": { 00:23:01.351 "state": "completed", 00:23:01.351 "digest": "sha512", 00:23:01.351 "dhgroup": "ffdhe2048" 00:23:01.351 } 00:23:01.351 } 00:23:01.351 ]' 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.351 14:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:01.351 14:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.351 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.351 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.351 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.612 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:01.612 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:02.181 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.181 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:02.181 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.181 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.441 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.441 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.441 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.441 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.442 14:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:02.442 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:02.702 00:23:02.702 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.702 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.702 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.962 { 00:23:02.962 "cntlid": 111, 00:23:02.962 "qid": 0, 00:23:02.962 "state": "enabled", 00:23:02.962 "thread": "nvmf_tgt_poll_group_000", 00:23:02.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:02.962 "listen_address": { 00:23:02.962 "trtype": "TCP", 00:23:02.962 "adrfam": "IPv4", 00:23:02.962 "traddr": "10.0.0.2", 00:23:02.962 "trsvcid": "4420" 00:23:02.962 }, 00:23:02.962 "peer_address": { 00:23:02.962 "trtype": "TCP", 00:23:02.962 "adrfam": "IPv4", 00:23:02.962 "traddr": "10.0.0.1", 00:23:02.962 "trsvcid": "39328" 00:23:02.962 }, 00:23:02.962 "auth": { 00:23:02.962 "state": "completed", 00:23:02.962 "digest": "sha512", 00:23:02.962 "dhgroup": "ffdhe2048" 00:23:02.962 } 00:23:02.962 } 00:23:02.962 ]' 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.962 
14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.962 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.221 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:03.221 14:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:03.851 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.112 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.378 00:23:04.378 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.378 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.378 14:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.640 { 00:23:04.640 "cntlid": 113, 00:23:04.640 "qid": 0, 00:23:04.640 "state": "enabled", 00:23:04.640 "thread": "nvmf_tgt_poll_group_000", 00:23:04.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:04.640 "listen_address": { 00:23:04.640 "trtype": "TCP", 00:23:04.640 "adrfam": "IPv4", 00:23:04.640 "traddr": "10.0.0.2", 00:23:04.640 "trsvcid": "4420" 00:23:04.640 }, 00:23:04.640 "peer_address": { 00:23:04.640 "trtype": "TCP", 00:23:04.640 "adrfam": "IPv4", 00:23:04.640 "traddr": "10.0.0.1", 00:23:04.640 "trsvcid": "39348" 00:23:04.640 }, 00:23:04.640 "auth": { 00:23:04.640 "state": "completed", 00:23:04.640 "digest": "sha512", 00:23:04.640 "dhgroup": "ffdhe3072" 00:23:04.640 } 00:23:04.640 } 00:23:04.640 ]' 00:23:04.640 14:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.640 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.899 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:04.899 14:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:05.467 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.727 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.988 00:23:05.988 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.988 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.988 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.248 { 00:23:06.248 "cntlid": 115, 00:23:06.248 "qid": 0, 00:23:06.248 "state": "enabled", 00:23:06.248 "thread": "nvmf_tgt_poll_group_000", 00:23:06.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:06.248 "listen_address": { 00:23:06.248 "trtype": "TCP", 00:23:06.248 "adrfam": "IPv4", 00:23:06.248 "traddr": "10.0.0.2", 00:23:06.248 "trsvcid": "4420" 00:23:06.248 }, 00:23:06.248 "peer_address": { 00:23:06.248 "trtype": "TCP", 00:23:06.248 "adrfam": "IPv4", 
00:23:06.248 "traddr": "10.0.0.1", 00:23:06.248 "trsvcid": "39378" 00:23:06.248 }, 00:23:06.248 "auth": { 00:23:06.248 "state": "completed", 00:23:06.248 "digest": "sha512", 00:23:06.248 "dhgroup": "ffdhe3072" 00:23:06.248 } 00:23:06.248 } 00:23:06.248 ]' 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.248 14:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.508 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:06.508 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.081 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.341 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.602 00:23:07.602 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.602 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.602 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.861 { 00:23:07.861 "cntlid": 117, 00:23:07.861 "qid": 0, 00:23:07.861 "state": "enabled", 00:23:07.861 "thread": "nvmf_tgt_poll_group_000", 00:23:07.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:07.861 "listen_address": { 00:23:07.861 "trtype": "TCP", 
00:23:07.861 "adrfam": "IPv4", 00:23:07.861 "traddr": "10.0.0.2", 00:23:07.861 "trsvcid": "4420" 00:23:07.861 }, 00:23:07.861 "peer_address": { 00:23:07.861 "trtype": "TCP", 00:23:07.861 "adrfam": "IPv4", 00:23:07.861 "traddr": "10.0.0.1", 00:23:07.861 "trsvcid": "37888" 00:23:07.861 }, 00:23:07.861 "auth": { 00:23:07.861 "state": "completed", 00:23:07.861 "digest": "sha512", 00:23:07.861 "dhgroup": "ffdhe3072" 00:23:07.861 } 00:23:07.861 } 00:23:07.861 ]' 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.861 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.862 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.121 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:08.122 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.691 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:08.951 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.211 00:23:09.211 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.211 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.211 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.471 { 00:23:09.471 "cntlid": 119, 00:23:09.471 "qid": 0, 00:23:09.471 "state": "enabled", 00:23:09.471 "thread": "nvmf_tgt_poll_group_000", 00:23:09.471 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:09.471 "listen_address": { 00:23:09.471 "trtype": "TCP", 00:23:09.471 "adrfam": "IPv4", 00:23:09.471 "traddr": "10.0.0.2", 00:23:09.471 "trsvcid": "4420" 00:23:09.471 }, 00:23:09.471 "peer_address": { 00:23:09.471 "trtype": "TCP", 00:23:09.471 "adrfam": "IPv4", 00:23:09.471 "traddr": "10.0.0.1", 00:23:09.471 "trsvcid": "37894" 00:23:09.471 }, 00:23:09.471 "auth": { 00:23:09.471 "state": "completed", 00:23:09.471 "digest": "sha512", 00:23:09.471 "dhgroup": "ffdhe3072" 00:23:09.471 } 00:23:09.471 } 00:23:09.471 ]' 00:23:09.471 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.471 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.471 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.472 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:09.472 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.472 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.472 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.472 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.731 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:09.731 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.302 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.302 14:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.562 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.821 00:23:10.821 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.821 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.822 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.082 14:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.082 { 00:23:11.082 "cntlid": 121, 00:23:11.082 "qid": 0, 00:23:11.082 "state": "enabled", 00:23:11.082 "thread": "nvmf_tgt_poll_group_000", 00:23:11.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:11.082 "listen_address": { 00:23:11.082 "trtype": "TCP", 00:23:11.082 "adrfam": "IPv4", 00:23:11.082 "traddr": "10.0.0.2", 00:23:11.082 "trsvcid": "4420" 00:23:11.082 }, 00:23:11.082 "peer_address": { 00:23:11.082 "trtype": "TCP", 00:23:11.082 "adrfam": "IPv4", 00:23:11.082 "traddr": "10.0.0.1", 00:23:11.082 "trsvcid": "37940" 00:23:11.082 }, 00:23:11.082 "auth": { 00:23:11.082 "state": "completed", 00:23:11.082 "digest": "sha512", 00:23:11.082 "dhgroup": "ffdhe4096" 00:23:11.082 } 00:23:11.082 } 00:23:11.082 ]' 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.082 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.342 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:11.342 14:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.913 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.172 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.433 00:23:12.433 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.433 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.433 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.693 { 00:23:12.693 "cntlid": 123, 00:23:12.693 "qid": 0, 00:23:12.693 "state": "enabled", 00:23:12.693 "thread": "nvmf_tgt_poll_group_000", 00:23:12.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:12.693 "listen_address": { 00:23:12.693 "trtype": "TCP", 00:23:12.693 "adrfam": "IPv4", 00:23:12.693 "traddr": "10.0.0.2", 00:23:12.693 "trsvcid": "4420" 00:23:12.693 }, 00:23:12.693 "peer_address": { 00:23:12.693 "trtype": "TCP", 00:23:12.693 "adrfam": "IPv4", 00:23:12.693 "traddr": "10.0.0.1", 00:23:12.693 "trsvcid": "37976" 00:23:12.693 }, 00:23:12.693 "auth": { 00:23:12.693 "state": "completed", 00:23:12.693 "digest": "sha512", 00:23:12.693 "dhgroup": "ffdhe4096" 00:23:12.693 } 00:23:12.693 } 00:23:12.693 ]' 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:12.693 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.953 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.953 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.953 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.953 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:12.953 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.892 14:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.892 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.893 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.893 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.893 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.893 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.893 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.893 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.152 00:23:14.152 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.152 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:14.152 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.417 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.417 14:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.417 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.417 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.417 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.417 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.417 { 00:23:14.418 "cntlid": 125, 00:23:14.418 "qid": 0, 00:23:14.418 "state": "enabled", 00:23:14.418 "thread": "nvmf_tgt_poll_group_000", 00:23:14.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:14.418 "listen_address": { 00:23:14.418 "trtype": "TCP", 00:23:14.418 "adrfam": "IPv4", 00:23:14.418 "traddr": "10.0.0.2", 00:23:14.418 "trsvcid": "4420" 00:23:14.418 }, 00:23:14.418 "peer_address": { 00:23:14.418 "trtype": "TCP", 00:23:14.418 "adrfam": "IPv4", 00:23:14.418 "traddr": "10.0.0.1", 00:23:14.418 "trsvcid": "37992" 00:23:14.418 }, 00:23:14.418 "auth": { 00:23:14.418 "state": "completed", 00:23:14.418 "digest": "sha512", 00:23:14.418 "dhgroup": "ffdhe4096" 00:23:14.418 } 00:23:14.418 } 00:23:14.418 ]' 00:23:14.418 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.418 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.418 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.418 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:14.418 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.681 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.681 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.682 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.682 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:14.682 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:15.621 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.621 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.881 00:23:15.881 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.881 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.881 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.142 14:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.142 { 00:23:16.142 "cntlid": 127, 00:23:16.142 "qid": 0, 00:23:16.142 "state": "enabled", 00:23:16.142 "thread": "nvmf_tgt_poll_group_000", 00:23:16.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:16.142 "listen_address": { 00:23:16.142 "trtype": "TCP", 00:23:16.142 "adrfam": "IPv4", 00:23:16.142 "traddr": "10.0.0.2", 00:23:16.142 "trsvcid": "4420" 00:23:16.142 }, 00:23:16.142 "peer_address": { 00:23:16.142 "trtype": "TCP", 00:23:16.142 "adrfam": "IPv4", 00:23:16.142 "traddr": "10.0.0.1", 00:23:16.142 "trsvcid": "38016" 00:23:16.142 }, 00:23:16.142 "auth": { 00:23:16.142 "state": "completed", 00:23:16.142 "digest": "sha512", 00:23:16.142 "dhgroup": "ffdhe4096" 00:23:16.142 } 00:23:16.142 } 00:23:16.142 ]' 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.142 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.402 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:16.402 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.973 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.232 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.233 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.492 00:23:17.492 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.492 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.492 
14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.752 { 00:23:17.752 "cntlid": 129, 00:23:17.752 "qid": 0, 00:23:17.752 "state": "enabled", 00:23:17.752 "thread": "nvmf_tgt_poll_group_000", 00:23:17.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:17.752 "listen_address": { 00:23:17.752 "trtype": "TCP", 00:23:17.752 "adrfam": "IPv4", 00:23:17.752 "traddr": "10.0.0.2", 00:23:17.752 "trsvcid": "4420" 00:23:17.752 }, 00:23:17.752 "peer_address": { 00:23:17.752 "trtype": "TCP", 00:23:17.752 "adrfam": "IPv4", 00:23:17.752 "traddr": "10.0.0.1", 00:23:17.752 "trsvcid": "46176" 00:23:17.752 }, 00:23:17.752 "auth": { 00:23:17.752 "state": "completed", 00:23:17.752 "digest": "sha512", 00:23:17.752 "dhgroup": "ffdhe6144" 00:23:17.752 } 00:23:17.752 } 00:23:17.752 ]' 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.752 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.012 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:18.012 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.012 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.012 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.012 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.272 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:18.272 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.842 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.102 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.362 00:23:19.362 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.362 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.362 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.622 { 00:23:19.622 "cntlid": 131, 00:23:19.622 "qid": 0, 00:23:19.622 "state": "enabled", 00:23:19.622 "thread": "nvmf_tgt_poll_group_000", 00:23:19.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:19.622 "listen_address": { 00:23:19.622 "trtype": "TCP", 00:23:19.622 "adrfam": "IPv4", 00:23:19.622 "traddr": "10.0.0.2", 00:23:19.622 "trsvcid": "4420" 00:23:19.622 }, 00:23:19.622 "peer_address": { 00:23:19.622 "trtype": "TCP", 00:23:19.622 "adrfam": "IPv4", 00:23:19.622 "traddr": "10.0.0.1", 00:23:19.622 "trsvcid": "46200" 00:23:19.622 }, 00:23:19.622 "auth": { 00:23:19.622 "state": "completed", 00:23:19.622 "digest": "sha512", 00:23:19.622 "dhgroup": "ffdhe6144" 00:23:19.622 } 00:23:19.622 } 00:23:19.622 ]' 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.622 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.882 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:19.882 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.500 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.834 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.100 00:23:21.100 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.100 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.100 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.361 { 00:23:21.361 "cntlid": 133, 00:23:21.361 "qid": 0, 00:23:21.361 "state": "enabled", 00:23:21.361 "thread": "nvmf_tgt_poll_group_000", 00:23:21.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:21.361 "listen_address": { 00:23:21.361 "trtype": "TCP", 00:23:21.361 "adrfam": "IPv4", 00:23:21.361 "traddr": "10.0.0.2", 00:23:21.361 "trsvcid": "4420" 00:23:21.361 }, 00:23:21.361 "peer_address": { 00:23:21.361 "trtype": "TCP", 00:23:21.361 "adrfam": "IPv4", 00:23:21.361 "traddr": "10.0.0.1", 00:23:21.361 "trsvcid": "46224" 00:23:21.361 }, 00:23:21.361 "auth": { 00:23:21.361 "state": "completed", 00:23:21.361 "digest": "sha512", 00:23:21.361 "dhgroup": "ffdhe6144" 00:23:21.361 } 00:23:21.361 } 00:23:21.361 ]' 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.361 14:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.622 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:21.622 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:22.191 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.191 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:22.192 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.192 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.192 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.192 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.192 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:22.192 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:23:22.452 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.712 00:23:22.712 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.712 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.712 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.971 { 00:23:22.971 "cntlid": 135, 00:23:22.971 "qid": 0, 00:23:22.971 "state": "enabled", 00:23:22.971 "thread": "nvmf_tgt_poll_group_000", 00:23:22.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:22.971 "listen_address": { 00:23:22.971 "trtype": "TCP", 00:23:22.971 "adrfam": "IPv4", 00:23:22.971 "traddr": "10.0.0.2", 00:23:22.971 "trsvcid": "4420" 00:23:22.971 }, 00:23:22.971 "peer_address": { 00:23:22.971 "trtype": "TCP", 00:23:22.971 "adrfam": "IPv4", 00:23:22.971 "traddr": "10.0.0.1", 00:23:22.971 "trsvcid": "46252" 00:23:22.971 }, 00:23:22.971 "auth": { 00:23:22.971 "state": "completed", 00:23:22.971 "digest": "sha512", 00:23:22.971 "dhgroup": "ffdhe6144" 00:23:22.971 } 00:23:22.971 } 00:23:22.971 ]' 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:22.971 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:23.231 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.231 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.231 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.231 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:23.231 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:23.801 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.061 14:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.635 00:23:24.635 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.635 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.635 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.896 { 00:23:24.896 "cntlid": 137, 00:23:24.896 "qid": 0, 00:23:24.896 "state": "enabled", 00:23:24.896 "thread": "nvmf_tgt_poll_group_000", 00:23:24.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:24.896 "listen_address": { 00:23:24.896 "trtype": "TCP", 00:23:24.896 "adrfam": "IPv4", 00:23:24.896 "traddr": "10.0.0.2", 00:23:24.896 "trsvcid": "4420" 00:23:24.896 }, 00:23:24.896 "peer_address": { 00:23:24.896 "trtype": "TCP", 00:23:24.896 "adrfam": "IPv4", 00:23:24.896 "traddr": "10.0.0.1", 00:23:24.896 "trsvcid": "46290" 00:23:24.896 }, 00:23:24.896 "auth": { 00:23:24.896 "state": "completed", 00:23:24.896 "digest": "sha512", 00:23:24.896 "dhgroup": "ffdhe8192" 00:23:24.896 } 00:23:24.896 } 00:23:24.896 ]' 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.896 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.157 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:25.157 14:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.727 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.987 14:19:29 
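The rounds in this stretch all follow one shape, driven by the loops at target/auth.sh@119-123: pin the host to a single digest/dhgroup pair, register the host NQN on the subsystem with key N (plus a controller key when bidirectional authentication is under test), attach a controller through the host's RPC socket, verify, then tear down and repeat with the next key. A minimal sketch of one round, using the same RPCs and paths as the trace (the shell variables are shorthand introduced here, not names from the script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Host side: allow exactly one digest/dhgroup combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side: register the host with its key and controller key.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller, authenticating with the same key pair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1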
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.987 14:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.559 00:23:26.559 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.559 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.559 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.819 { 00:23:26.819 "cntlid": 139, 00:23:26.819 "qid": 0, 00:23:26.819 "state": "enabled", 00:23:26.819 "thread": "nvmf_tgt_poll_group_000", 00:23:26.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:26.819 "listen_address": { 00:23:26.819 "trtype": "TCP", 00:23:26.819 "adrfam": "IPv4", 00:23:26.819 "traddr": "10.0.0.2", 00:23:26.819 "trsvcid": "4420" 00:23:26.819 }, 00:23:26.819 "peer_address": { 00:23:26.819 "trtype": "TCP", 00:23:26.819 "adrfam": "IPv4", 00:23:26.819 "traddr": "10.0.0.1", 00:23:26.819 "trsvcid": "46306" 00:23:26.819 }, 00:23:26.819 "auth": { 00:23:26.819 "state": "completed", 00:23:26.819 "digest": "sha512", 00:23:26.819 "dhgroup": "ffdhe8192" 00:23:26.819 } 00:23:26.819 } 00:23:26.819 ]' 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.819 14:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.819 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.080 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:27.080 14:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: --dhchap-ctrl-secret DHHC-1:02:ZTBmYTU2MmZmMTlmMjgzM2RjOTdkNDBiMjc1NGUwZGIxZjg0OThiYjE1MjU1NmVmNYBSaw==: 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.651 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.912 14:19:31 
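Each successful attach is then verified the same way: the controller name is read back with bdev_nvme_get_controllers, and the qpair dump from nvmf_subsystem_get_qpairs is filtered with jq to confirm the negotiated digest, the dhgroup, and a completed auth state. A condensed form of those checks, with the same jq paths and expected values as in the dumps above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host-side controller must exist under the expected name.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target-side qpair must report the negotiated auth parameters.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]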
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.912 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.483 00:23:28.483 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.483 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.483 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.483 { 00:23:28.483 "cntlid": 141, 00:23:28.483 "qid": 0, 00:23:28.483 "state": "enabled", 00:23:28.483 "thread": "nvmf_tgt_poll_group_000", 00:23:28.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:28.483 "listen_address": { 00:23:28.483 "trtype": "TCP", 00:23:28.483 "adrfam": "IPv4", 00:23:28.483 "traddr": "10.0.0.2", 00:23:28.483 "trsvcid": "4420" 00:23:28.483 }, 00:23:28.483 "peer_address": { 00:23:28.483 "trtype": "TCP", 00:23:28.483 "adrfam": "IPv4", 00:23:28.483 "traddr": "10.0.0.1", 00:23:28.483 "trsvcid": "58124" 00:23:28.483 }, 00:23:28.483 "auth": { 00:23:28.483 "state": "completed", 00:23:28.483 "digest": "sha512", 00:23:28.483 "dhgroup": "ffdhe8192" 00:23:28.483 } 00:23:28.483 } 00:23:28.483 ]' 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.483 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.744 14:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.744 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.744 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.744 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.744 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.744 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:28.744 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:01:ZDA5ZDQ0YzVlMGQzZjBmODljOGI1NjFhOTkzNDUyMzY3bc2K: 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.683 14:19:33 
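After the RPC-level controller is detached again, each round is repeated with the kernel initiator: nvme-cli connects with the DHHC-1 secrets passed inline (nvme_connect, target/auth.sh@80 expanding to @36), then disconnects so the host can be deregistered. A sketch of that leg, with the secrets elided here rather than repeated from the trace:

    hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0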
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.683 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:30.263 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:30.263 { 00:23:30.263 "cntlid": 143, 00:23:30.263 "qid": 0, 00:23:30.263 "state": "enabled", 00:23:30.263 "thread": "nvmf_tgt_poll_group_000", 00:23:30.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:30.263 "listen_address": { 00:23:30.263 "trtype": "TCP", 00:23:30.263 "adrfam": "IPv4", 00:23:30.263 "traddr": "10.0.0.2", 00:23:30.263 "trsvcid": "4420" 00:23:30.263 }, 00:23:30.263 "peer_address": { 00:23:30.263 "trtype": "TCP", 00:23:30.263 "adrfam": "IPv4", 00:23:30.263 "traddr": "10.0.0.1", 00:23:30.263 "trsvcid": "58146" 00:23:30.263 }, 00:23:30.263 "auth": { 00:23:30.263 "state": "completed", 00:23:30.263 "digest": "sha512", 00:23:30.263 "dhgroup": "ffdhe8192" 00:23:30.263 } 00:23:30.263 } 00:23:30.263 ]' 00:23:30.263 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:30.523 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.523 
14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:30.523 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.523 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:30.523 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.523 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.523 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.783 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:30.783 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:31.352 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.612 14:19:35 
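The single-combination ffdhe6144/ffdhe8192 rounds end here; target/auth.sh@129-141 then widens the host back to the full digest/dhgroup matrix in one call before re-running connect_authenticate. That is the set_options invocation traced above, in script form:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192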
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.612 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.872 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.141 { 00:23:32.141 "cntlid": 145, 00:23:32.141 "qid": 0, 00:23:32.141 "state": "enabled", 00:23:32.141 "thread": "nvmf_tgt_poll_group_000", 00:23:32.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:32.141 "listen_address": { 00:23:32.141 "trtype": "TCP", 00:23:32.141 "adrfam": "IPv4", 00:23:32.141 "traddr": "10.0.0.2", 00:23:32.141 "trsvcid": "4420" 00:23:32.141 }, 00:23:32.141 "peer_address": { 00:23:32.141 
"trtype": "TCP", 00:23:32.141 "adrfam": "IPv4", 00:23:32.141 "traddr": "10.0.0.1", 00:23:32.141 "trsvcid": "58178" 00:23:32.141 }, 00:23:32.141 "auth": { 00:23:32.141 "state": "completed", 00:23:32.141 "digest": "sha512", 00:23:32.141 "dhgroup": "ffdhe8192" 00:23:32.141 } 00:23:32.141 } 00:23:32.141 ]' 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:32.141 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.401 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.401 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.401 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.401 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.401 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.401 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:32.401 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZTEyNzY2OTVkYTBmOWNiOGI0NjljOWRiNWNlMmQ2MTU4MjY2MzUzOGNiMTExNWI1kvMjyQ==: --dhchap-ctrl-secret DHHC-1:03:ODZlMzhhYTk0NGY5Y2ExOTE1NTYyZTFlY2FiNzZjODk2MDY0NzM2YjJhMGU2Y2Y4ZTc3MGFiM2JjMzY1ZDM1YnxPk+k=: 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:33.341 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:33.602 request: 00:23:33.602 { 00:23:33.602 "name": "nvme0", 00:23:33.602 "trtype": "tcp", 00:23:33.602 "traddr": "10.0.0.2", 00:23:33.602 "adrfam": "ipv4", 00:23:33.602 "trsvcid": "4420", 00:23:33.602 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:33.602 "prchk_reftag": false, 00:23:33.602 "prchk_guard": false, 00:23:33.602 "hdgst": false, 00:23:33.602 "ddgst": false, 00:23:33.602 "dhchap_key": "key2", 00:23:33.602 "allow_unrecognized_csi": false, 00:23:33.602 "method": "bdev_nvme_attach_controller", 00:23:33.602 "req_id": 1 00:23:33.602 } 00:23:33.602 Got JSON-RPC error response 00:23:33.602 response: 00:23:33.602 { 00:23:33.602 "code": -5, 00:23:33.602 "message": "Input/output error" 00:23:33.602 } 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.602 14:19:37 
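This begins the negative cases. The host was just registered with key1 only (target/auth.sh@144), so the attach with key2 at @145 is expected to fail; NOT is the autotest_common.sh helper that inverts the exit status, making the JSON-RPC error above (code -5, "Input/output error") the pass condition. The two cases that follow repeat the idea with a mismatched controller key (@150) and with a controller key the target never registered (@155). Rough shape of the first check, using the script's own helpers (rpc_cmd, NOT, bdev_connect):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Register only key1, then prove an attach with key2 is rejected.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
    NOT bdev_connect -b nvme0 --dhchap-key key2   # must fail, so NOT succeeds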
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.602 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:34.173 request: 00:23:34.173 { 00:23:34.173 "name": "nvme0", 00:23:34.173 "trtype": "tcp", 00:23:34.173 "traddr": "10.0.0.2", 00:23:34.173 "adrfam": "ipv4", 00:23:34.173 "trsvcid": "4420", 00:23:34.173 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:34.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:34.173 "prchk_reftag": false, 00:23:34.173 "prchk_guard": false, 00:23:34.173 "hdgst": false, 00:23:34.173 "ddgst": false, 00:23:34.173 "dhchap_key": "key1", 00:23:34.173 "dhchap_ctrlr_key": "ckey2", 00:23:34.173 "allow_unrecognized_csi": false, 00:23:34.173 "method": "bdev_nvme_attach_controller", 00:23:34.173 "req_id": 1 00:23:34.173 } 00:23:34.173 Got JSON-RPC error response 00:23:34.173 response: 00:23:34.173 { 00:23:34.173 "code": -5, 00:23:34.173 "message": "Input/output error" 00:23:34.173 } 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:34.173 14:19:37 
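Same pattern, second variant: the host key is correct but the controller key is not (key1 paired with ckey2), and the identical -5 error above confirms the controller-side check also fails closed:

    NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2   # mismatched ctrlr key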
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.173 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.446 request: 00:23:34.446 { 00:23:34.446 "name": "nvme0", 00:23:34.446 "trtype": "tcp", 00:23:34.446 "traddr": "10.0.0.2", 00:23:34.446 "adrfam": "ipv4", 00:23:34.447 "trsvcid": "4420", 00:23:34.447 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:34.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:34.447 "prchk_reftag": false, 00:23:34.447 "prchk_guard": false, 00:23:34.447 "hdgst": false, 00:23:34.447 "ddgst": false, 00:23:34.447 "dhchap_key": "key1", 00:23:34.447 "dhchap_ctrlr_key": "ckey1", 00:23:34.447 "allow_unrecognized_csi": false, 00:23:34.447 "method": "bdev_nvme_attach_controller", 00:23:34.447 "req_id": 1 00:23:34.447 } 00:23:34.447 Got JSON-RPC error response 00:23:34.447 response: 00:23:34.447 { 00:23:34.447 "code": -5, 00:23:34.447 "message": "Input/output error" 00:23:34.447 } 00:23:34.447 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.448 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1714058 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1714058 ']' 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1714058 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714058 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714058' 00:23:34.712 killing process with pid 1714058 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1714058 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1714058 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1740308 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1740308 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1740308 ']' 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.712 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1740308 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1740308 ']' 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
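target/auth.sh@159 kills the first target process (pid 1714058) and @160 restarts nvmf_tgt, now pid 1740308, inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc, so the next phase can load keys over RPC before the target starts serving; -L nvmf_auth enables the auth debug log component. The launch line from nvmf/common.sh@506-508, reconstructed from the trace (namespace and pid are specific to this run):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: block until /var/tmp/spdk.sock is up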
00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.653 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 null0 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IFd 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Rhd ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rhd 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XyX 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.zWV ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zWV 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.914 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kMy 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.nqi ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nqi 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DcF 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
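[editor's note] At this point the target has loaded four key slots (key0..key3, plus controller keys ckey0..ckey2) from the /tmp/spdk.key-* files and authorized the host NQN for key3; the host app then attaches controller nvme0 authenticating with that same key. Condensed into a stand-alone sketch for a single slot, under the assumption that the key file is registered on both the target's default socket and the host app's /var/tmp/host.sock keyring (the harness did the host-side registration earlier, outside this excerpt):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Target side: register the DHHC-1 key file and authorize the host for it.
  "$RPC" keyring_file_add_key key3 /tmp/spdk.key-sha512.DcF
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

  # Host side: register the same key, then attach with DH-CHAP authentication.
  "$RPC" -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DcF
  "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3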
00:23:35.914 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.854 nvme0n1 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.854 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.854 { 00:23:36.854 "cntlid": 1, 00:23:36.855 "qid": 0, 00:23:36.855 "state": "enabled", 00:23:36.855 "thread": "nvmf_tgt_poll_group_000", 00:23:36.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:36.855 "listen_address": { 00:23:36.855 "trtype": "TCP", 00:23:36.855 "adrfam": "IPv4", 00:23:36.855 "traddr": "10.0.0.2", 00:23:36.855 "trsvcid": "4420" 00:23:36.855 }, 00:23:36.855 "peer_address": { 00:23:36.855 "trtype": "TCP", 00:23:36.855 "adrfam": "IPv4", 00:23:36.855 "traddr": "10.0.0.1", 00:23:36.855 "trsvcid": "58208" 00:23:36.855 }, 00:23:36.855 "auth": { 00:23:36.855 "state": "completed", 00:23:36.855 "digest": "sha512", 00:23:36.855 "dhgroup": "ffdhe8192" 00:23:36.855 } 00:23:36.855 } 00:23:36.855 ]' 00:23:36.855 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.115 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.375 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:37.375 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:37.944 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:38.204 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:38.204 request: 00:23:38.204 { 00:23:38.204 "name": "nvme0", 00:23:38.204 "trtype": "tcp", 00:23:38.204 "traddr": "10.0.0.2", 00:23:38.204 "adrfam": "ipv4", 00:23:38.204 "trsvcid": "4420", 00:23:38.204 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:38.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:38.204 "prchk_reftag": false, 00:23:38.204 "prchk_guard": false, 00:23:38.204 "hdgst": false, 00:23:38.204 "ddgst": false, 00:23:38.204 "dhchap_key": "key3", 00:23:38.204 "allow_unrecognized_csi": false, 00:23:38.204 "method": "bdev_nvme_attach_controller", 00:23:38.204 "req_id": 1 00:23:38.204 } 00:23:38.204 Got JSON-RPC error response 00:23:38.204 response: 00:23:38.204 { 00:23:38.204 "code": -5, 00:23:38.204 "message": "Input/output error" 00:23:38.204 } 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:38.466 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:38.466 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:38.726 request: 00:23:38.726 { 00:23:38.726 "name": "nvme0", 00:23:38.726 "trtype": "tcp", 00:23:38.726 "traddr": "10.0.0.2", 00:23:38.726 "adrfam": "ipv4", 00:23:38.726 "trsvcid": "4420", 00:23:38.726 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:38.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:38.726 "prchk_reftag": false, 00:23:38.726 "prchk_guard": false, 00:23:38.726 "hdgst": false, 00:23:38.726 "ddgst": false, 00:23:38.726 "dhchap_key": "key3", 00:23:38.726 "allow_unrecognized_csi": false, 00:23:38.726 "method": "bdev_nvme_attach_controller", 00:23:38.726 "req_id": 1 00:23:38.726 } 00:23:38.726 Got JSON-RPC error response 00:23:38.726 response: 00:23:38.726 { 00:23:38.726 "code": -5, 00:23:38.726 "message": "Input/output error" 00:23:38.726 } 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.726 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.987 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:39.247 request: 00:23:39.247 { 00:23:39.247 "name": "nvme0", 00:23:39.247 "trtype": "tcp", 00:23:39.247 "traddr": "10.0.0.2", 00:23:39.247 "adrfam": "ipv4", 00:23:39.247 "trsvcid": "4420", 00:23:39.247 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:39.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:39.247 "prchk_reftag": false, 00:23:39.247 "prchk_guard": false, 00:23:39.247 "hdgst": false, 00:23:39.247 "ddgst": false, 00:23:39.247 "dhchap_key": "key0", 00:23:39.247 "dhchap_ctrlr_key": "key1", 00:23:39.247 "allow_unrecognized_csi": false, 00:23:39.247 "method": "bdev_nvme_attach_controller", 00:23:39.247 "req_id": 1 00:23:39.247 } 00:23:39.247 Got JSON-RPC error response 00:23:39.247 response: 00:23:39.247 { 00:23:39.247 "code": -5, 00:23:39.247 "message": "Input/output error" 00:23:39.247 } 00:23:39.247 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:39.247 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:39.247 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:39.247 14:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:39.247 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:39.247 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:39.247 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:39.508 nvme0n1 00:23:39.508 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:39.508 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:39.508 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.767 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.767 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.767 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.767 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:39.767 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.767 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.027 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.027 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:40.027 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:40.027 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:40.596 nvme0n1 00:23:40.596 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:40.596 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:40.596 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:40.858 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.118 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.118 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:41.118 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: --dhchap-ctrl-secret DHHC-1:03:NTY3NzIwZmZhYjQ0YzRlMDA3ZGJiMDQ4YzUxODg5N2I4NGI1MmVlZjYwODNkOGUwZmU5ZDVhZmRmNWVhM2E5YuM6to8=: 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.688 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:41.948 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:42.208 request: 00:23:42.208 { 00:23:42.208 "name": "nvme0", 00:23:42.208 "trtype": "tcp", 00:23:42.208 "traddr": "10.0.0.2", 00:23:42.208 "adrfam": "ipv4", 00:23:42.208 "trsvcid": "4420", 00:23:42.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:42.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:42.208 "prchk_reftag": false, 00:23:42.208 "prchk_guard": false, 00:23:42.208 "hdgst": false, 00:23:42.208 "ddgst": false, 00:23:42.208 "dhchap_key": "key1", 00:23:42.208 "allow_unrecognized_csi": false, 00:23:42.208 "method": "bdev_nvme_attach_controller", 00:23:42.208 "req_id": 1 00:23:42.208 } 00:23:42.208 Got JSON-RPC error response 00:23:42.208 response: 00:23:42.208 { 00:23:42.208 "code": -5, 00:23:42.208 "message": "Input/output error" 00:23:42.208 } 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:42.208 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:43.148 nvme0n1 00:23:43.148 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:43.148 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:43.148 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.148 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.148 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.148 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:43.408 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:43.668 nvme0n1 00:23:43.668 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:43.668 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:43.668 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.668 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.668 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.668 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: '' 2s 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:43.928 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: ]] 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MmFlYmVhZGNmZTQxNGNmMzAzNjJhMDMxYmI4MjI4NDNS3Vfk: 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:43.929 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.470 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: 2s 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: ]] 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2MyZDcxMzYyOThiNjUzNjgwNzk0OGFmZDgwNzZjNmY5YThlMGZjYWQxZWJmYzM4QBhMwA==: 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:46.471 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:48.380 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:48.950 nvme0n1 00:23:48.950 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:48.950 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.950 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.950 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.950 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:48.950 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:49.210 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:49.210 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:49.210 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:49.470 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:49.730 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:49.730 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:49.730 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.730 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.730 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:49.730 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.990 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:50.250 request: 00:23:50.250 { 00:23:50.250 "name": "nvme0", 00:23:50.250 "dhchap_key": "key1", 00:23:50.250 "dhchap_ctrlr_key": "key3", 00:23:50.250 "method": "bdev_nvme_set_keys", 00:23:50.250 "req_id": 1 00:23:50.250 } 00:23:50.250 Got JSON-RPC error response 00:23:50.250 response: 00:23:50.250 { 00:23:50.250 "code": -13, 00:23:50.250 "message": "Permission denied" 00:23:50.250 } 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:50.250 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.511 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:50.511 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:51.451 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:51.451 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:51.451 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:51.711 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:52.280 nvme0n1 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
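[editor's note] This tail of the log exercises re-keying: nvmf_subsystem_set_keys swaps the keys the subsystem will accept for the host, bdev_nvme_set_keys pushes matching keys to the live host controller, and a mismatched pair (key1/key3 above, key2/key0 below) is rejected with JSON-RPC error -13 Permission denied, after which the jq-length/sleep loop waits for the orphaned controller to drop off. A condensed sketch of that failure path, with NQNs and key names copied from the trace; it is not the harness's code:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Rotate the keys the subsystem will accept for this host.
  "$RPC" nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # Pushing a non-matching pair to the live controller is refused outright:
  # bdev_nvme_set_keys returns -13 (Permission denied), and the controller,
  # created with --ctrlr-loss-timeout-sec 1, is torn down shortly afterwards.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3 || true

  # Wait for the dead path to disappear, mirroring the log's jq/sleep loop.
  while [ "$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
      sleep 1
  done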
00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:52.540 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:52.801 request: 00:23:52.801 { 00:23:52.801 "name": "nvme0", 00:23:52.801 "dhchap_key": "key2", 00:23:52.801 "dhchap_ctrlr_key": "key0", 00:23:52.801 "method": "bdev_nvme_set_keys", 00:23:52.801 "req_id": 1 00:23:52.801 } 00:23:52.801 Got JSON-RPC error response 00:23:52.801 response: 00:23:52.801 { 00:23:52.801 "code": -13, 00:23:52.801 "message": "Permission denied" 00:23:52.801 } 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:52.801 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.062 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:53.062 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:54.003 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:54.003 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:54.003 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1714212 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1714212 ']' 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1714212 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:54.262 
14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714212 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714212' 00:23:54.262 killing process with pid 1714212 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1714212 00:23:54.262 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1714212 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.522 rmmod nvme_tcp 00:23:54.522 rmmod nvme_fabrics 00:23:54.522 rmmod nvme_keyring 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1740308 ']' 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1740308 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1740308 ']' 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1740308 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.522 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1740308 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1740308' 00:23:54.783 killing process with pid 1740308 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1740308 00:23:54.783 14:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1740308 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.783 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.697 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.958 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.IFd /tmp/spdk.key-sha256.XyX /tmp/spdk.key-sha384.kMy /tmp/spdk.key-sha512.DcF /tmp/spdk.key-sha512.Rhd /tmp/spdk.key-sha384.zWV /tmp/spdk.key-sha256.nqi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:56.958 00:23:56.958 real 2m37.357s 00:23:56.958 user 5m52.867s 00:23:56.958 sys 0m25.068s 00:23:56.958 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.958 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.958 ************************************ 00:23:56.958 END TEST nvmf_auth_target 00:23:56.958 ************************************ 00:23:56.958 14:20:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:56.958 14:20:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:56.958 14:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:56.959 14:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.959 14:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:56.959 ************************************ 00:23:56.959 START TEST nvmf_bdevio_no_huge 00:23:56.959 ************************************ 00:23:56.959 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:56.959 * Looking for test storage... 
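
The "Permission denied" exchange earlier in this stretch is the point of that auth step, not a harness failure: target/auth.sh rotates the host toward a key pair the target no longer accepts and then asserts that the RPC exits non-zero (the es=1 check). A minimal standalone sketch of the same negative check, reusing the controller name nvme0, the key names, and the host RPC socket /var/tmp/host.sock from the trace:

    #!/usr/bin/env bash
    # Negative check: an invalid DH-HMAC-CHAP key rotation must be rejected.
    # Controller name, key names, and socket path are taken from the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
            --dhchap-key key2 --dhchap-ctrlr-key key0; then
        echo "FAIL: rotation to a rejected key pair was accepted" >&2
        exit 1
    fi
    echo "OK: got the expected JSON-RPC error -13 (Permission denied)"
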
00:23:56.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:56.959 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:56.959 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:23:56.959 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:57.220 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:57.220 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.220 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.220 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.220 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.220 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:57.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.221 --rc genhtml_branch_coverage=1 00:23:57.221 --rc genhtml_function_coverage=1 00:23:57.221 --rc genhtml_legend=1 00:23:57.221 --rc geninfo_all_blocks=1 00:23:57.221 --rc geninfo_unexecuted_blocks=1 00:23:57.221 00:23:57.221 ' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:57.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.221 --rc genhtml_branch_coverage=1 00:23:57.221 --rc genhtml_function_coverage=1 00:23:57.221 --rc genhtml_legend=1 00:23:57.221 --rc geninfo_all_blocks=1 00:23:57.221 --rc geninfo_unexecuted_blocks=1 00:23:57.221 00:23:57.221 ' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:57.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.221 --rc genhtml_branch_coverage=1 00:23:57.221 --rc genhtml_function_coverage=1 00:23:57.221 --rc genhtml_legend=1 00:23:57.221 --rc geninfo_all_blocks=1 00:23:57.221 --rc geninfo_unexecuted_blocks=1 00:23:57.221 00:23:57.221 ' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:57.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.221 --rc genhtml_branch_coverage=1 00:23:57.221 --rc genhtml_function_coverage=1 00:23:57.221 --rc genhtml_legend=1 00:23:57.221 --rc geninfo_all_blocks=1 00:23:57.221 --rc geninfo_unexecuted_blocks=1 00:23:57.221 00:23:57.221 ' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:57.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.221 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.222 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:57.222 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:57.222 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.222 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.365 
14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:05.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:05.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:05.365 Found net devices under 0000:31:00.0: cvl_0_0 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:05.365 Found net devices under 0000:31:00.1: cvl_0_1 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:24:05.365 00:24:05.365 --- 10.0.0.2 ping statistics --- 00:24:05.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.365 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:24:05.365 00:24:05.365 --- 10.0.0.1 ping statistics --- 00:24:05.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.365 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1748527 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1748527 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1748527 ']' 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.365 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.366 [2024-10-13 14:20:08.519265] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:05.366 [2024-10-13 14:20:08.519332] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:05.366 [2024-10-13 14:20:08.675453] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:05.366 [2024-10-13 14:20:08.712322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.366 [2024-10-13 14:20:08.757523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.366 [2024-10-13 14:20:08.757559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.366 [2024-10-13 14:20:08.757568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.366 [2024-10-13 14:20:08.757575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.366 [2024-10-13 14:20:08.757581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
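
For the no-huge variant the target is started above inside the cvl_0_0_ns_spdk namespace with hugepages disabled and a hard 1024 MiB cap (-s 1024) on core mask 0x78, and the harness then blocks until the RPC socket answers. A condensed sketch of that start-up using the exact flags from the trace; the rpc_get_methods poll stands in for waitforlisten and is an assumption of this sketch, not the harness code:

    # Launch nvmf_tgt without hugepages inside the test namespace (flags from the trace).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # The RPC endpoint is a Unix socket, so it is reachable from the root namespace.
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock -t 60 rpc_get_methods > /dev/null
    echo "nvmf_tgt is up as pid $nvmfpid"
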
00:24:05.366 [2024-10-13 14:20:08.759132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:05.366 [2024-10-13 14:20:08.759343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.366 [2024-10-13 14:20:08.759343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:05.366 [2024-10-13 14:20:08.759183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 [2024-10-13 14:20:09.396254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 Malloc0 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:05.939 [2024-10-13 14:20:09.449977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:05.939 { 00:24:05.939 "params": { 00:24:05.939 "name": "Nvme$subsystem", 00:24:05.939 "trtype": "$TEST_TRANSPORT", 00:24:05.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.939 "adrfam": "ipv4", 00:24:05.939 "trsvcid": "$NVMF_PORT", 00:24:05.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.939 "hdgst": ${hdgst:-false}, 00:24:05.939 "ddgst": ${ddgst:-false} 00:24:05.939 }, 00:24:05.939 "method": "bdev_nvme_attach_controller" 00:24:05.939 } 00:24:05.939 EOF 00:24:05.939 )") 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:24:05.939 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:05.939 "params": { 00:24:05.939 "name": "Nvme1", 00:24:05.939 "trtype": "tcp", 00:24:05.939 "traddr": "10.0.0.2", 00:24:05.939 "adrfam": "ipv4", 00:24:05.939 "trsvcid": "4420", 00:24:05.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.939 "hdgst": false, 00:24:05.939 "ddgst": false 00:24:05.939 }, 00:24:05.939 "method": "bdev_nvme_attach_controller" 00:24:05.939 }' 00:24:05.939 [2024-10-13 14:20:09.505576] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:05.939 [2024-10-13 14:20:09.505648] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1748766 ] 00:24:06.200 [2024-10-13 14:20:09.652177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
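
Stripped of the rpc_cmd plumbing, the target-side provisioning traced above is five RPCs: create the TCP transport, back it with a 64 MiB / 512 B-block malloc bdev, and expose that bdev as a namespace of cnode1 listening on 10.0.0.2:4420. Collected in one place (addressing the default /var/tmp/spdk.sock socket is an assumption of this sketch):

    # The five provisioning RPCs from the trace, issued back to back.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
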
00:24:06.200 [2024-10-13 14:20:09.691819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:06.200 [2024-10-13 14:20:09.738370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.200 [2024-10-13 14:20:09.738515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.200 [2024-10-13 14:20:09.738515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.461 I/O targets: 00:24:06.461 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:06.461 00:24:06.461 00:24:06.461 CUnit - A unit testing framework for C - Version 2.1-3 00:24:06.461 http://cunit.sourceforge.net/ 00:24:06.461 00:24:06.461 00:24:06.461 Suite: bdevio tests on: Nvme1n1 00:24:06.461 Test: blockdev write read block ...passed 00:24:06.461 Test: blockdev write zeroes read block ...passed 00:24:06.461 Test: blockdev write zeroes read no split ...passed 00:24:06.461 Test: blockdev write zeroes read split ...passed 00:24:06.461 Test: blockdev write zeroes read split partial ...passed 00:24:06.461 Test: blockdev reset ...[2024-10-13 14:20:10.142823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.461 [2024-10-13 14:20:10.142947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f8410 (9): Bad file descriptor 00:24:06.461 [2024-10-13 14:20:10.158104] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:06.461 passed 00:24:06.461 Test: blockdev write read 8 blocks ...passed 00:24:06.729 Test: blockdev write read size > 128k ...passed 00:24:06.729 Test: blockdev write read invalid size ...passed 00:24:06.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:06.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:06.729 Test: blockdev write read max offset ...passed 00:24:06.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:06.729 Test: blockdev writev readv 8 blocks ...passed 00:24:06.729 Test: blockdev writev readv 30 x 1block ...passed 00:24:06.729 Test: blockdev writev readv block ...passed 00:24:06.729 Test: blockdev writev readv size > 128k ...passed 00:24:06.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:06.729 Test: blockdev comparev and writev ...[2024-10-13 14:20:10.382500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.382559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.382577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.382586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.383046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.383059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.383078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:24:06.729 [2024-10-13 14:20:10.383086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.383658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.383669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.383683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.383691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.384280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.384292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:06.729 [2024-10-13 14:20:10.384307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:06.729 [2024-10-13 14:20:10.384315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:06.729 passed 00:24:07.021 Test: blockdev nvme passthru rw ...passed 00:24:07.021 Test: blockdev nvme passthru vendor specific ...[2024-10-13 14:20:10.468977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.021 [2024-10-13 14:20:10.468994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:07.021 [2024-10-13 14:20:10.469365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.021 [2024-10-13 14:20:10.469376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:07.021 [2024-10-13 14:20:10.469787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.021 [2024-10-13 14:20:10.469799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:07.021 [2024-10-13 14:20:10.470194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:07.021 [2024-10-13 14:20:10.470206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:07.021 passed 00:24:07.021 Test: blockdev nvme admin passthru ...passed 00:24:07.021 Test: blockdev copy ...passed 00:24:07.021 00:24:07.021 Run Summary: Type Total Ran Passed Failed Inactive 00:24:07.021 suites 1 1 n/a 0 0 00:24:07.021 tests 23 23 23 0 0 00:24:07.021 asserts 152 152 152 0 n/a 00:24:07.021 00:24:07.021 Elapsed time = 1.219 seconds 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.319 14:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.319 rmmod nvme_tcp 00:24:07.319 rmmod nvme_fabrics 00:24:07.319 rmmod nvme_keyring 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:24:07.319 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1748527 ']' 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1748527 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1748527 ']' 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1748527 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.320 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1748527 00:24:07.320 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:24:07.320 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:24:07.320 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1748527' 00:24:07.320 killing process with pid 1748527 00:24:07.320 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1748527 00:24:07.320 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1748527 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
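
The nvmf_tcp_fini that follows unwinds the network setup, and its first move is the iptr helper: replay the current iptables ruleset minus every rule the harness added. That filter is safe precisely because each earlier insertion went through ipts, which tagged its rule with an SPDK_NVMF comment (see the 'ipts -I INPUT 1 -i cvl_0_1 ...' line during setup above). The whole firewall cleanup is one pipeline, exactly as traced below:

    # Drop only the harness-tagged firewall rules; leave everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
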
00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.916 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.830 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.830 00:24:09.830 real 0m12.942s 00:24:09.830 user 0m14.345s 00:24:09.830 sys 0m6.959s 00:24:09.830 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.830 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:09.830 ************************************ 00:24:09.830 END TEST nvmf_bdevio_no_huge 00:24:09.830 ************************************ 00:24:09.830 14:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:09.831 14:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:09.831 14:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.831 14:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.831 ************************************ 00:24:09.831 START TEST nvmf_tls 00:24:09.831 ************************************ 00:24:09.831 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:10.092 * Looking for test storage... 
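
Every suite in this job runs through the same run_test wrapper, which produces the START TEST / END TEST banner rows and the real/user/sys timing block visible at each suite boundary here. Conceptually it reduces to the outline below; the canonical implementation lives in autotest_common.sh and does more (timing records, xtrace management), so treat this as a sketch rather than the real code:

    # Outline of run_test as inferred from the banners in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys block seen after each suite
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
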
00:24:10.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:10.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.092 --rc genhtml_branch_coverage=1 00:24:10.092 --rc genhtml_function_coverage=1 00:24:10.092 --rc genhtml_legend=1 00:24:10.092 --rc geninfo_all_blocks=1 00:24:10.092 --rc geninfo_unexecuted_blocks=1 00:24:10.092 00:24:10.092 ' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:10.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.092 --rc genhtml_branch_coverage=1 00:24:10.092 --rc genhtml_function_coverage=1 00:24:10.092 --rc genhtml_legend=1 00:24:10.092 --rc geninfo_all_blocks=1 00:24:10.092 --rc geninfo_unexecuted_blocks=1 00:24:10.092 00:24:10.092 ' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:10.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.092 --rc genhtml_branch_coverage=1 00:24:10.092 --rc genhtml_function_coverage=1 00:24:10.092 --rc genhtml_legend=1 00:24:10.092 --rc geninfo_all_blocks=1 00:24:10.092 --rc geninfo_unexecuted_blocks=1 00:24:10.092 00:24:10.092 ' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:10.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.092 --rc genhtml_branch_coverage=1 00:24:10.092 --rc genhtml_function_coverage=1 00:24:10.092 --rc genhtml_legend=1 00:24:10.092 --rc geninfo_all_blocks=1 00:24:10.092 --rc geninfo_unexecuted_blocks=1 00:24:10.092 00:24:10.092 ' 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:10.092 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
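The lcov probe above gates the coverage flags on "lt 1.15 2": cmp_versions splits both version strings on ".", "-", and ":" (the IFS=.-: reads), then walks the fields left to right until one side wins. A simplified standalone sketch of that comparison, with the operator fixed to "<":

lt() {                                 # usage: lt 1.15 2 -> exit 0 if $1 sorts before $2
    local IFS=.-: v max
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing field decides
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                           # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch taken above

One caveat: the real helper also validates each field with the "decimal" check seen in the trace; this sketch assumes the fields are already numeric.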
00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.093 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:18.242 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:18.242 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:18.242 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:18.243 Found net devices under 0000:31:00.0: cvl_0_0 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:18.243 Found net devices under 0000:31:00.1: cvl_0_1 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:18.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:24:18.243 00:24:18.243 --- 10.0.0.2 ping statistics --- 00:24:18.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.243 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:18.243 00:24:18.243 --- 10.0.0.1 ping statistics --- 00:24:18.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.243 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1753299 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1753299 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1753299 ']' 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.243 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.243 [2024-10-13 14:20:21.566306] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:18.243 [2024-10-13 14:20:21.566374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.243 [2024-10-13 14:20:21.712038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:18.243 [2024-10-13 14:20:21.759349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.243 [2024-10-13 14:20:21.785464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.243 [2024-10-13 14:20:21.785506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.243 [2024-10-13 14:20:21.785515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.243 [2024-10-13 14:20:21.785522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.243 [2024-10-13 14:20:21.785528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.243 [2024-10-13 14:20:21.786241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:18.815 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:19.076 true 00:24:19.076 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:19.076 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.336 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:19.337 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:19.337 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:19.337 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.337 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:19.597 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:19.597 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:19.597 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:19.858 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.858 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:19.858 14:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:19.858 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:19.858 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.858 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:20.119 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:20.119 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:20.119 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:20.380 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:20.380 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:20.380 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:20.380 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:20.380 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:20.641 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:20.641 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=ffeeddccbbaa99887766554433221100 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yWKDShppKo 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.45VpdRmhe4 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yWKDShppKo 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.45VpdRmhe4 00:24:20.902 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:21.163 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:21.423 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yWKDShppKo 00:24:21.423 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yWKDShppKo 00:24:21.423 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.683 [2024-10-13 14:20:25.141156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.683 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.683 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:21.944 [2024-10-13 14:20:25.465170] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.944 [2024-10-13 14:20:25.465368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.944 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:21.944 malloc0 00:24:21.944 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:22.204 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yWKDShppKo 
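format_interchange_psk above wraps each raw key in the TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (01 for the digest=1 case traced here), and a base64 blob, colon-terminated. Judging from the "python -" helper in nvmf/common.sh, the blob is the configured key bytes with a CRC32 trailer appended before encoding; the little-endian byte order in the sketch below is an assumption:

key=00112233445566778899aabbccddeeff    # the first test key used in this run
python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:01:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key"

The chmod 0600 that follows keeps the key files private before they are handed to keyring_file_add_key.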
00:24:22.464 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.464 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yWKDShppKo 00:24:34.690 Initializing NVMe Controllers 00:24:34.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:34.690 Initialization complete. Launching workers. 00:24:34.690 ======================================================== 00:24:34.690 Latency(us) 00:24:34.690 Device Information : IOPS MiB/s Average min max 00:24:34.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18623.19 72.75 3436.77 1202.04 4367.35 00:24:34.690 ======================================================== 00:24:34.690 Total : 18623.19 72.75 3436.77 1202.04 4367.35 00:24:34.690 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yWKDShppKo 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yWKDShppKo 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1756296 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1756296 /var/tmp/bdevperf.sock 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1756296 ']' 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.690 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:34.690 [2024-10-13 14:20:36.386657] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:34.691 [2024-10-13 14:20:36.386710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756296 ] 00:24:34.691 [2024-10-13 14:20:36.517127] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:34.691 [2024-10-13 14:20:36.566233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.691 [2024-10-13 14:20:36.583923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.691 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.691 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:34.691 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yWKDShppKo 00:24:34.691 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.691 [2024-10-13 14:20:37.478854] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.691 TLSTESTn1 00:24:34.691 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:34.691 Running I/O for 10 seconds... 
00:24:36.337 5110.00 IOPS, 19.96 MiB/s [2024-10-13T12:20:40.986Z] 5281.50 IOPS, 20.63 MiB/s [2024-10-13T12:20:41.929Z] 5050.67 IOPS, 19.73 MiB/s [2024-10-13T12:20:42.870Z] 5331.75 IOPS, 20.83 MiB/s [2024-10-13T12:20:43.810Z] 5551.60 IOPS, 21.69 MiB/s [2024-10-13T12:20:44.752Z] 5474.67 IOPS, 21.39 MiB/s [2024-10-13T12:20:45.691Z] 5500.57 IOPS, 21.49 MiB/s [2024-10-13T12:20:47.073Z] 5536.38 IOPS, 21.63 MiB/s [2024-10-13T12:20:48.013Z] 5616.67 IOPS, 21.94 MiB/s [2024-10-13T12:20:48.013Z] 5652.20 IOPS, 22.08 MiB/s 00:24:44.306 Latency(us) 00:24:44.306 [2024-10-13T12:20:48.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.307 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.307 Verification LBA range: start 0x0 length 0x2000 00:24:44.307 TLSTESTn1 : 10.01 5659.53 22.11 0.00 0.00 22582.88 3831.87 60872.06 00:24:44.307 [2024-10-13T12:20:48.014Z] =================================================================================================================== 00:24:44.307 [2024-10-13T12:20:48.014Z] Total : 5659.53 22.11 0.00 0.00 22582.88 3831.87 60872.06 00:24:44.307 { 00:24:44.307 "results": [ 00:24:44.307 { 00:24:44.307 "job": "TLSTESTn1", 00:24:44.307 "core_mask": "0x4", 00:24:44.307 "workload": "verify", 00:24:44.307 "status": "finished", 00:24:44.307 "verify_range": { 00:24:44.307 "start": 0, 00:24:44.307 "length": 8192 00:24:44.307 }, 00:24:44.307 "queue_depth": 128, 00:24:44.307 "io_size": 4096, 00:24:44.307 "runtime": 10.009489, 00:24:44.307 "iops": 5659.52967229396, 00:24:44.307 "mibps": 22.107537782398282, 00:24:44.307 "io_failed": 0, 00:24:44.307 "io_timeout": 0, 00:24:44.307 "avg_latency_us": 22582.88495960822, 00:24:44.307 "min_latency_us": 3831.874373538256, 00:24:44.307 "max_latency_us": 60872.06147677915 00:24:44.307 } 00:24:44.307 ], 00:24:44.307 "core_count": 1 00:24:44.307 } 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1756296 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1756296 ']' 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1756296 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1756296 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1756296' 00:24:44.307 killing process with pid 1756296 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1756296 00:24:44.307 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.307 00:24:44.307 Latency(us) 00:24:44.307 [2024-10-13T12:20:48.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.307 [2024-10-13T12:20:48.014Z] 
=================================================================================================================== 00:24:44.307 [2024-10-13T12:20:48.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1756296 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.45VpdRmhe4 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.45VpdRmhe4 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.45VpdRmhe4 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.45VpdRmhe4 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1758371 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1758371 /var/tmp/bdevperf.sock 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1758371 ']' 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
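The wrong-key case is driven through the NOT wrapper traced above: run the command, capture its exit status into es, and invert the verdict, so the test only passes when bdevperf fails to attach. A simplified sketch (the real autotest_common.sh helper also special-cases signal deaths, which is what the "(( es > 128 ))" check above is for):

NOT() {
    local es=0
    "$@" || es=$?     # run the wrapped command, remember how it exited
    (( es != 0 ))     # success here means the command failed, as expected
}
NOT /bin/false && echo "false failed, as required"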
00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.307 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.307 [2024-10-13 14:20:47.916448] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:44.307 [2024-10-13 14:20:47.916507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758371 ] 00:24:44.567 [2024-10-13 14:20:48.050506] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:44.567 [2024-10-13 14:20:48.073843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.567 [2024-10-13 14:20:48.089451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.139 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.139 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:45.139 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.45VpdRmhe4 00:24:45.398 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:45.398 [2024-10-13 14:20:49.048577] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.399 [2024-10-13 14:20:49.052966] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:45.399 [2024-10-13 14:20:49.053618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4af0 (107): Transport endpoint is not connected 00:24:45.399 [2024-10-13 14:20:49.054611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4af0 (9): Bad file descriptor 00:24:45.399 [2024-10-13 14:20:49.055611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.399 [2024-10-13 14:20:49.055618] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:45.399 [2024-10-13 14:20:49.055624] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:45.399 [2024-10-13 14:20:49.055632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.399 request: 00:24:45.399 { 00:24:45.399 "name": "TLSTEST", 00:24:45.399 "trtype": "tcp", 00:24:45.399 "traddr": "10.0.0.2", 00:24:45.399 "adrfam": "ipv4", 00:24:45.399 "trsvcid": "4420", 00:24:45.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.399 "prchk_reftag": false, 00:24:45.399 "prchk_guard": false, 00:24:45.399 "hdgst": false, 00:24:45.399 "ddgst": false, 00:24:45.399 "psk": "key0", 00:24:45.399 "allow_unrecognized_csi": false, 00:24:45.399 "method": "bdev_nvme_attach_controller", 00:24:45.399 "req_id": 1 00:24:45.399 } 00:24:45.399 Got JSON-RPC error response 00:24:45.399 response: 00:24:45.399 { 00:24:45.399 "code": -5, 00:24:45.399 "message": "Input/output error" 00:24:45.399 } 00:24:45.399 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1758371 00:24:45.399 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1758371 ']' 00:24:45.399 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1758371 00:24:45.399 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:45.399 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.399 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1758371 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1758371' 00:24:45.659 killing process with pid 1758371 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1758371 00:24:45.659 Received shutdown signal, test time was about 10.000000 seconds 00:24:45.659 00:24:45.659 Latency(us) 00:24:45.659 [2024-10-13T12:20:49.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.659 [2024-10-13T12:20:49.366Z] =================================================================================================================== 00:24:45.659 [2024-10-13T12:20:49.366Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1758371 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yWKDShppKo 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.yWKDShppKo 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yWKDShppKo 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yWKDShppKo 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1758720 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1758720 /var/tmp/bdevperf.sock 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1758720 ']' 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.659 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.659 [2024-10-13 14:20:49.289905] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:45.659 [2024-10-13 14:20:49.289961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758720 ] 00:24:45.920 [2024-10-13 14:20:49.420461] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
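Every TLS case in this file reuses the same harness shape: start bdevperf idle (-z) on a private RPC socket, load the key and attach the TLS-wrapped controller over that socket, then trigger the I/O run from bdevperf.py. Stripped of the jenkins workspace paths, and with a placeholder key-file path, the sequence is:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Only the hostnqn and the key file vary between the positive run and the NOT cases.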
00:24:45.920 [2024-10-13 14:20:49.469164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.920 [2024-10-13 14:20:49.483893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.490 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:46.490 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:46.490 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yWKDShppKo 00:24:46.750 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:46.750 [2024-10-13 14:20:50.434998] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:46.750 [2024-10-13 14:20:50.442454] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:46.750 [2024-10-13 14:20:50.442472] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:46.750 [2024-10-13 14:20:50.442491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:46.750 [2024-10-13 14:20:50.443138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf01af0 (107): Transport endpoint is not connected 00:24:46.750 [2024-10-13 14:20:50.444132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf01af0 (9): Bad file descriptor 00:24:46.750 [2024-10-13 14:20:50.445131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-10-13 14:20:50.445137] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:46.750 [2024-10-13 14:20:50.445144] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:46.750 [2024-10-13 14:20:50.445153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
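Note: this failure is the point of the test case. The target looks up the PSK by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>", and key0 was registered for host1 only, so a handshake presenting host2 matches nothing, the connection is dropped (errno 107, Transport endpoint is not connected), and the attach RPC below surfaces as -5 Input/output error. Since no I/O ever runs, the shutdown Latency table that follows reports zero IOPS with a min latency of 18446744073709551616.00, which is 2^64, an untouched UINT64_MAX sentinel. Had host2 been meant to succeed, the target-side authorization would mirror the add_host call used for host1 later in this trace:

  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0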
00:24:46.750 request: 00:24:46.750 { 00:24:46.750 "name": "TLSTEST", 00:24:46.750 "trtype": "tcp", 00:24:46.750 "traddr": "10.0.0.2", 00:24:46.750 "adrfam": "ipv4", 00:24:46.750 "trsvcid": "4420", 00:24:46.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:46.750 "prchk_reftag": false, 00:24:46.750 "prchk_guard": false, 00:24:46.750 "hdgst": false, 00:24:46.750 "ddgst": false, 00:24:46.750 "psk": "key0", 00:24:46.750 "allow_unrecognized_csi": false, 00:24:46.750 "method": "bdev_nvme_attach_controller", 00:24:46.750 "req_id": 1 00:24:46.750 } 00:24:46.750 Got JSON-RPC error response 00:24:46.750 response: 00:24:46.750 { 00:24:46.750 "code": -5, 00:24:46.750 "message": "Input/output error" 00:24:46.750 } 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1758720 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1758720 ']' 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1758720 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1758720 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1758720' 00:24:47.011 killing process with pid 1758720 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1758720 00:24:47.011 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.011 00:24:47.011 Latency(us) 00:24:47.011 [2024-10-13T12:20:50.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.011 [2024-10-13T12:20:50.718Z] =================================================================================================================== 00:24:47.011 [2024-10-13T12:20:50.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1758720 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yWKDShppKo 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.yWKDShppKo 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yWKDShppKo 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yWKDShppKo 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1759058 00:24:47.011 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1759058 /var/tmp/bdevperf.sock 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1759058 ']' 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.012 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.012 [2024-10-13 14:20:50.683730] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:47.012 [2024-10-13 14:20:50.683790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759058 ] 00:24:47.272 [2024-10-13 14:20:50.814366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
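Note: the bracketed "DPDK EAL parameters" line shows how the SPDK app options are translated for DPDK: -m 0x4 becomes the EAL core mask -c 0x4 (bit 2 set, hence the recurring "Reactor started on core 2" notices), and --file-prefix=spdk_pid<pid> keeps each run's hugepage state isolated so the many short-lived bdevperf instances in this suite cannot collide. The repeated pci_dpdk.c notice is informational: this build pairs SPDK with an in-development DPDK 24.11.0-rc0, enabled for validation only.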
00:24:47.272 [2024-10-13 14:20:50.860285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.272 [2024-10-13 14:20:50.874854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.842 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.842 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:47.843 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yWKDShppKo 00:24:48.103 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.363 [2024-10-13 14:20:51.821783] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.363 [2024-10-13 14:20:51.832570] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:48.363 [2024-10-13 14:20:51.832587] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:48.363 [2024-10-13 14:20:51.832606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:48.363 [2024-10-13 14:20:51.832896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdcaf0 (107): Transport endpoint is not connected 00:24:48.363 [2024-10-13 14:20:51.833890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdcaf0 (9): Bad file descriptor 00:24:48.363 [2024-10-13 14:20:51.834889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:48.363 [2024-10-13 14:20:51.834896] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:48.363 [2024-10-13 14:20:51.834902] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:48.363 [2024-10-13 14:20:51.834910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
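Note: this case flips the other half of the identity. The initiator presents the authorized hostnqn (host1) but asks for subsystem cnode2, which was never created on the target, so the lookup again finds no key and the attach fails with the same -5:

  # PSK identity searched on the target for this attempt (host half valid, subsystem half unknown):
  #   NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2

Together with the previous case this shows the PSK is resolved on the (hostnqn, subnqn) pair, not on the host alone.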
00:24:48.363 request: 00:24:48.363 { 00:24:48.363 "name": "TLSTEST", 00:24:48.363 "trtype": "tcp", 00:24:48.363 "traddr": "10.0.0.2", 00:24:48.363 "adrfam": "ipv4", 00:24:48.363 "trsvcid": "4420", 00:24:48.364 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:48.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.364 "prchk_reftag": false, 00:24:48.364 "prchk_guard": false, 00:24:48.364 "hdgst": false, 00:24:48.364 "ddgst": false, 00:24:48.364 "psk": "key0", 00:24:48.364 "allow_unrecognized_csi": false, 00:24:48.364 "method": "bdev_nvme_attach_controller", 00:24:48.364 "req_id": 1 00:24:48.364 } 00:24:48.364 Got JSON-RPC error response 00:24:48.364 response: 00:24:48.364 { 00:24:48.364 "code": -5, 00:24:48.364 "message": "Input/output error" 00:24:48.364 } 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1759058 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1759058 ']' 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1759058 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1759058 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1759058' 00:24:48.364 killing process with pid 1759058 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1759058 00:24:48.364 Received shutdown signal, test time was about 10.000000 seconds 00:24:48.364 00:24:48.364 Latency(us) 00:24:48.364 [2024-10-13T12:20:52.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.364 [2024-10-13T12:20:52.071Z] =================================================================================================================== 00:24:48.364 [2024-10-13T12:20:52.071Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:48.364 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1759058 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:48.364 
14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1759321 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1759321 /var/tmp/bdevperf.sock 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1759321 ']' 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.364 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.625 [2024-10-13 14:20:52.071416] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:48.625 [2024-10-13 14:20:52.071472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759321 ] 00:24:48.625 [2024-10-13 14:20:52.201945] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
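Note: this case passes an empty string as the key path (the '' argument to run_bdevperf above). keyring_file_add_key requires an absolute path, so the key is rejected before any TLS state exists, and the subsequent attach fails with -126 Required key not available rather than an I/O error, because the name key0 is referenced by the attach RPC but was never added to the keyring:

  # Fails up front: non-absolute (here empty) paths are rejected by the file-based keyring.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''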
00:24:48.625 [2024-10-13 14:20:52.247666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.625 [2024-10-13 14:20:52.263188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.195 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:49.195 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:49.195 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:49.456 [2024-10-13 14:20:53.030182] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:49.456 [2024-10-13 14:20:53.030207] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:49.456 request: 00:24:49.456 { 00:24:49.456 "name": "key0", 00:24:49.456 "path": "", 00:24:49.456 "method": "keyring_file_add_key", 00:24:49.456 "req_id": 1 00:24:49.456 } 00:24:49.456 Got JSON-RPC error response 00:24:49.456 response: 00:24:49.456 { 00:24:49.456 "code": -1, 00:24:49.456 "message": "Operation not permitted" 00:24:49.456 } 00:24:49.456 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:49.716 [2024-10-13 14:20:53.206287] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.716 [2024-10-13 14:20:53.206308] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:49.716 request: 00:24:49.716 { 00:24:49.716 "name": "TLSTEST", 00:24:49.716 "trtype": "tcp", 00:24:49.716 "traddr": "10.0.0.2", 00:24:49.716 "adrfam": "ipv4", 00:24:49.716 "trsvcid": "4420", 00:24:49.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.716 "prchk_reftag": false, 00:24:49.716 "prchk_guard": false, 00:24:49.716 "hdgst": false, 00:24:49.716 "ddgst": false, 00:24:49.716 "psk": "key0", 00:24:49.716 "allow_unrecognized_csi": false, 00:24:49.716 "method": "bdev_nvme_attach_controller", 00:24:49.716 "req_id": 1 00:24:49.716 } 00:24:49.716 Got JSON-RPC error response 00:24:49.716 response: 00:24:49.716 { 00:24:49.716 "code": -126, 00:24:49.716 "message": "Required key not available" 00:24:49.716 } 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1759321 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1759321 ']' 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1759321 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1759321 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:49.716 14:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1759321' 00:24:49.716 killing process with pid 1759321 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1759321 00:24:49.716 Received shutdown signal, test time was about 10.000000 seconds 00:24:49.716 00:24:49.716 Latency(us) 00:24:49.716 [2024-10-13T12:20:53.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.716 [2024-10-13T12:20:53.423Z] =================================================================================================================== 00:24:49.716 [2024-10-13T12:20:53.423Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1759321 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1753299 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1753299 ']' 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1753299 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.716 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1753299 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1753299' 00:24:49.978 killing process with pid 1753299 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1753299 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1753299 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # 
python - 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.TUXgXkn2vk 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.TUXgXkn2vk 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1759606 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1759606 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1759606 ']' 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.978 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.239 [2024-10-13 14:20:53.687467] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:50.239 [2024-10-13 14:20:53.687540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.239 [2024-10-13 14:20:53.828182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:50.239 [2024-10-13 14:20:53.875970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.239 [2024-10-13 14:20:53.897216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.239 [2024-10-13 14:20:53.897255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
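Note: key_long above is an NVMe TLS PSK in the interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash indicator (02 here, requested via "format_interchange_psk ... 2"), and a base64 field, all colon-delimited. The base64 body carries the configured PSK bytes (here the 48-character test string itself; MDAx... decodes to the ASCII digits) followed by a 4-byte CRC32, which is why the decoded blob is four bytes longer than the key. A quick way to peel the token apart in the shell (a sketch; head -c with a negative count assumes GNU coreutils, and the placement of the trailing CRC comes from the interchange-format definition, not from this trace):

  key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  blob=$(printf '%s' "$key" | cut -d: -f3)
  printf '%s' "$blob" | base64 -d | head -c -4            # the configured PSK bytes
  printf '%s' "$blob" | base64 -d | tail -c 4 | xxd -p    # trailing 4-byte CRC32 over the PSK

Also note the chmod 0600 right after the key file is written; the permission cases later in the suite depend on that mode.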
00:24:50.239 [2024-10-13 14:20:53.897262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.239 [2024-10-13 14:20:53.897268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.239 [2024-10-13 14:20:53.897274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.239 [2024-10-13 14:20:53.897902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.809 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.809 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:50.809 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:50.809 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:50.809 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.077 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.TUXgXkn2vk 00:24:51.077 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TUXgXkn2vk 00:24:51.077 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:51.077 [2024-10-13 14:20:54.685006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.077 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:51.337 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:51.598 [2024-10-13 14:20:55.045057] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.598 [2024-10-13 14:20:55.045252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.598 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:51.598 malloc0 00:24:51.598 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:51.859 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TUXgXkn2vk 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TUXgXkn2vk 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1760119 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1760119 /var/tmp/bdevperf.sock 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1760119 ']' 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.119 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.120 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.120 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.120 [2024-10-13 14:20:55.824836] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:24:52.120 [2024-10-13 14:20:55.824918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760119 ] 00:24:52.380 [2024-10-13 14:20:55.961619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
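Note: this is the positive case. The target side was just reconfigured by setup_nvmf_tgt (trace above); condensed with workspace paths shortened, TLS is enabled per listener with -k and the host is authorized against a named key:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With both ends holding the same key under matching identities, the attach succeeds, TLSTESTn1 appears, and the 10-second verify run below sustains roughly 5.4k IOPS (see the Job: TLSTESTn1 summary).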
00:24:52.380 [2024-10-13 14:20:56.010480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.380 [2024-10-13 14:20:56.026619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.951 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.951 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:52.951 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:24:53.212 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:53.473 [2024-10-13 14:20:56.957611] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.473 TLSTESTn1 00:24:53.473 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:53.473 Running I/O for 10 seconds... 00:24:55.798 5864.00 IOPS, 22.91 MiB/s [2024-10-13T12:21:00.448Z] 5348.50 IOPS, 20.89 MiB/s [2024-10-13T12:21:01.390Z] 5309.00 IOPS, 20.74 MiB/s [2024-10-13T12:21:02.332Z] 5360.75 IOPS, 20.94 MiB/s [2024-10-13T12:21:03.272Z] 5416.00 IOPS, 21.16 MiB/s [2024-10-13T12:21:04.212Z] 5544.50 IOPS, 21.66 MiB/s [2024-10-13T12:21:05.153Z] 5538.29 IOPS, 21.63 MiB/s [2024-10-13T12:21:06.181Z] 5409.50 IOPS, 21.13 MiB/s [2024-10-13T12:21:07.183Z] 5515.44 IOPS, 21.54 MiB/s [2024-10-13T12:21:07.183Z] 5420.20 IOPS, 21.17 MiB/s 00:25:03.476 Latency(us) 00:25:03.476 [2024-10-13T12:21:07.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.476 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:03.476 Verification LBA range: start 0x0 length 0x2000 00:25:03.476 TLSTESTn1 : 10.02 5424.12 21.19 0.00 0.00 23564.35 5063.55 29122.25 00:25:03.476 [2024-10-13T12:21:07.183Z] =================================================================================================================== 00:25:03.476 [2024-10-13T12:21:07.183Z] Total : 5424.12 21.19 0.00 0.00 23564.35 5063.55 29122.25 00:25:03.476 { 00:25:03.476 "results": [ 00:25:03.476 { 00:25:03.476 "job": "TLSTESTn1", 00:25:03.476 "core_mask": "0x4", 00:25:03.476 "workload": "verify", 00:25:03.476 "status": "finished", 00:25:03.476 "verify_range": { 00:25:03.476 "start": 0, 00:25:03.476 "length": 8192 00:25:03.476 }, 00:25:03.476 "queue_depth": 128, 00:25:03.476 "io_size": 4096, 00:25:03.476 "runtime": 10.016193, 00:25:03.476 "iops": 5424.116727782702, 00:25:03.476 "mibps": 21.18795596790118, 00:25:03.476 "io_failed": 0, 00:25:03.476 "io_timeout": 0, 00:25:03.477 "avg_latency_us": 23564.353547381877, 00:25:03.477 "min_latency_us": 5063.548279318409, 00:25:03.477 "max_latency_us": 29122.245238890744 00:25:03.477 } 00:25:03.477 ], 00:25:03.477 "core_count": 1 00:25:03.477 } 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1760119 00:25:03.738 14:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1760119 ']' 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1760119 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1760119 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1760119' 00:25:03.738 killing process with pid 1760119 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1760119 00:25:03.738 Received shutdown signal, test time was about 10.000000 seconds 00:25:03.738 00:25:03.738 Latency(us) 00:25:03.738 [2024-10-13T12:21:07.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.738 [2024-10-13T12:21:07.445Z] =================================================================================================================== 00:25:03.738 [2024-10-13T12:21:07.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1760119 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.TUXgXkn2vk 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TUXgXkn2vk 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TUXgXkn2vk 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TUXgXkn2vk 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TUXgXkn2vk 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@28 -- # bdevperf_pid=1762310 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1762310 /var/tmp/bdevperf.sock 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1762310 ']' 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:03.738 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:03.738 [2024-10-13 14:21:07.413465] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:03.738 [2024-10-13 14:21:07.413522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762310 ] 00:25:03.999 [2024-10-13 14:21:07.544298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
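Note: the chmod 0666 before this run is the point of the test. The file-based keyring refuses key files that are readable by group or others, so the very first RPC is expected to fail, and the attach then fails with -126 because key0 never entered the keyring. Only owner-only modes are accepted:

  chmod 0600 /tmp/tmp.TUXgXkn2vk   # accepted by keyring_file_add_key
  chmod 0666 /tmp/tmp.TUXgXkn2vk   # rejected; the error reports the full mode as 0100666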
00:25:03.999 [2024-10-13 14:21:07.592030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.999 [2024-10-13 14:21:07.607729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.570 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:04.570 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:04.570 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:04.830 [2024-10-13 14:21:08.366864] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TUXgXkn2vk': 0100666 00:25:04.830 [2024-10-13 14:21:08.366889] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:04.830 request: 00:25:04.830 { 00:25:04.830 "name": "key0", 00:25:04.830 "path": "/tmp/tmp.TUXgXkn2vk", 00:25:04.830 "method": "keyring_file_add_key", 00:25:04.830 "req_id": 1 00:25:04.830 } 00:25:04.830 Got JSON-RPC error response 00:25:04.830 response: 00:25:04.830 { 00:25:04.830 "code": -1, 00:25:04.830 "message": "Operation not permitted" 00:25:04.830 } 00:25:04.830 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:05.091 [2024-10-13 14:21:08.550966] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:05.091 [2024-10-13 14:21:08.550992] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:05.091 request: 00:25:05.091 { 00:25:05.091 "name": "TLSTEST", 00:25:05.091 "trtype": "tcp", 00:25:05.091 "traddr": "10.0.0.2", 00:25:05.091 "adrfam": "ipv4", 00:25:05.091 "trsvcid": "4420", 00:25:05.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.091 "prchk_reftag": false, 00:25:05.091 "prchk_guard": false, 00:25:05.091 "hdgst": false, 00:25:05.091 "ddgst": false, 00:25:05.091 "psk": "key0", 00:25:05.091 "allow_unrecognized_csi": false, 00:25:05.091 "method": "bdev_nvme_attach_controller", 00:25:05.091 "req_id": 1 00:25:05.091 } 00:25:05.091 Got JSON-RPC error response 00:25:05.091 response: 00:25:05.091 { 00:25:05.091 "code": -126, 00:25:05.091 "message": "Required key not available" 00:25:05.091 } 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1762310 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1762310 ']' 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1762310 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1762310 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = 
sudo ']' 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1762310' 00:25:05.091 killing process with pid 1762310 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1762310 00:25:05.091 Received shutdown signal, test time was about 10.000000 seconds 00:25:05.091 00:25:05.091 Latency(us) 00:25:05.091 [2024-10-13T12:21:08.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.091 [2024-10-13T12:21:08.798Z] =================================================================================================================== 00:25:05.091 [2024-10-13T12:21:08.798Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1762310 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1759606 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1759606 ']' 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1759606 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.091 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1759606 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1759606' 00:25:05.352 killing process with pid 1759606 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1759606 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1759606 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1762833 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1762833 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1762833 ']' 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.352 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.352 [2024-10-13 14:21:08.965585] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:05.352 [2024-10-13 14:21:08.965640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.612 [2024-10-13 14:21:09.105529] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:05.612 [2024-10-13 14:21:09.128184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.612 [2024-10-13 14:21:09.143112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.612 [2024-10-13 14:21:09.143138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.612 [2024-10-13 14:21:09.143143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.612 [2024-10-13 14:21:09.143147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.612 [2024-10-13 14:21:09.143152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
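Note: the same permission check is now exercised on the target side. setup_nvmf_tgt is run under NOT (expected to fail) with the key file still at mode 0666: transport, subsystem, listener, and namespace creation all succeed, keyring_file_add_key is the step that rejects the file, and nvmf_subsystem_add_host then fails with -32603 Internal error because it cannot resolve a key that was never added:

  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk   # fails: file mode is 0666
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0                         # fails: Key 'key0' does not exist

The suite then restores the key file to 0600 and restarts the target for the next case.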
00:25:05.612 [2024-10-13 14:21:09.143635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.TUXgXkn2vk 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.TUXgXkn2vk 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.TUXgXkn2vk 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TUXgXkn2vk 00:25:06.183 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:06.443 [2024-10-13 14:21:09.964039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.443 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:06.704 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:06.704 [2024-10-13 14:21:10.324093] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:06.704 [2024-10-13 14:21:10.324288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.704 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:06.965 malloc0 00:25:06.965 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:07.226 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:07.226 [2024-10-13 
14:21:10.849828] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TUXgXkn2vk': 0100666 00:25:07.226 [2024-10-13 14:21:10.849848] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:07.226 request: 00:25:07.226 { 00:25:07.226 "name": "key0", 00:25:07.226 "path": "/tmp/tmp.TUXgXkn2vk", 00:25:07.226 "method": "keyring_file_add_key", 00:25:07.226 "req_id": 1 00:25:07.226 } 00:25:07.226 Got JSON-RPC error response 00:25:07.226 response: 00:25:07.226 { 00:25:07.226 "code": -1, 00:25:07.226 "message": "Operation not permitted" 00:25:07.226 } 00:25:07.226 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:07.487 [2024-10-13 14:21:11.029877] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:07.487 [2024-10-13 14:21:11.029905] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:07.487 request: 00:25:07.487 { 00:25:07.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.487 "host": "nqn.2016-06.io.spdk:host1", 00:25:07.487 "psk": "key0", 00:25:07.487 "method": "nvmf_subsystem_add_host", 00:25:07.487 "req_id": 1 00:25:07.487 } 00:25:07.487 Got JSON-RPC error response 00:25:07.487 response: 00:25:07.487 { 00:25:07.487 "code": -32603, 00:25:07.487 "message": "Internal error" 00:25:07.487 } 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1762833 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1762833 ']' 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1762833 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1762833 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1762833' 00:25:07.487 killing process with pid 1762833 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1762833 00:25:07.487 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1762833 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.TUXgXkn2vk 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:07.749 14:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1763661 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1763661 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1763661 ']' 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.749 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.749 [2024-10-13 14:21:11.298363] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:07.749 [2024-10-13 14:21:11.298422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.749 [2024-10-13 14:21:11.436500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:08.010 [2024-10-13 14:21:11.484635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.010 [2024-10-13 14:21:11.505557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.010 [2024-10-13 14:21:11.505594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.010 [2024-10-13 14:21:11.505600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.010 [2024-10-13 14:21:11.505605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.010 [2024-10-13 14:21:11.505610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
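The failed keyring_file_add_key above is the point of the negative test at target/tls.sh@178: SPDK's file-based keyring rejects a PSK file left at mode 0666, so the RPC returns -1 ("Operation not permitted"), and the dependent nvmf_subsystem_add_host then fails with -32603 because key0 was never added. target/tls.sh@182 tightens the file to 0600 before the target is restarted. A minimal sketch of the working sequence, using the temp key path from this run and abbreviating the workspace path to rpc.py:

    # key file must be owner-only before the keyring will accept it
    chmod 0600 /tmp/tmp.TUXgXkn2vk
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0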
00:25:08.010 [2024-10-13 14:21:11.506238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.580 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.580 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:08.580 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:08.581 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:08.581 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.581 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.581 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.TUXgXkn2vk 00:25:08.581 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TUXgXkn2vk 00:25:08.581 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:08.842 [2024-10-13 14:21:12.301658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.842 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:08.842 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:09.103 [2024-10-13 14:21:12.621694] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.103 [2024-10-13 14:21:12.621892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.103 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:09.364 malloc0 00:25:09.364 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:09.364 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1764103 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1764103 /var/tmp/bdevperf.sock 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1764103 ']' 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.625 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.886 [2024-10-13 14:21:13.350023] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:09.886 [2024-10-13 14:21:13.350079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764103 ] 00:25:09.886 [2024-10-13 14:21:13.480133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:09.886 [2024-10-13 14:21:13.528842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.886 [2024-10-13 14:21:13.544912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.457 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:10.457 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:10.457 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:10.718 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.980 [2024-10-13 14:21:14.455943] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.980 TLSTESTn1 00:25:10.980 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:11.241 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:11.241 "subsystems": [ 00:25:11.241 { 00:25:11.241 "subsystem": "keyring", 00:25:11.241 "config": [ 00:25:11.241 { 00:25:11.241 "method": "keyring_file_add_key", 00:25:11.241 "params": { 00:25:11.241 "name": "key0", 00:25:11.241 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:11.241 } 00:25:11.241 } 00:25:11.241 ] 00:25:11.241 }, 00:25:11.241 { 00:25:11.241 "subsystem": "iobuf", 00:25:11.241 "config": [ 00:25:11.241 { 00:25:11.241 "method": "iobuf_set_options", 00:25:11.241 "params": { 00:25:11.241 "small_pool_count": 8192, 00:25:11.241 "large_pool_count": 1024, 00:25:11.241 "small_bufsize": 8192, 00:25:11.241 "large_bufsize": 135168 00:25:11.241 } 00:25:11.241 } 00:25:11.241 ] 00:25:11.241 }, 00:25:11.241 { 00:25:11.241 "subsystem": 
"sock", 00:25:11.241 "config": [ 00:25:11.241 { 00:25:11.241 "method": "sock_set_default_impl", 00:25:11.241 "params": { 00:25:11.241 "impl_name": "posix" 00:25:11.241 } 00:25:11.241 }, 00:25:11.241 { 00:25:11.241 "method": "sock_impl_set_options", 00:25:11.241 "params": { 00:25:11.241 "impl_name": "ssl", 00:25:11.241 "recv_buf_size": 4096, 00:25:11.241 "send_buf_size": 4096, 00:25:11.241 "enable_recv_pipe": true, 00:25:11.241 "enable_quickack": false, 00:25:11.241 "enable_placement_id": 0, 00:25:11.241 "enable_zerocopy_send_server": true, 00:25:11.241 "enable_zerocopy_send_client": false, 00:25:11.241 "zerocopy_threshold": 0, 00:25:11.241 "tls_version": 0, 00:25:11.241 "enable_ktls": false 00:25:11.241 } 00:25:11.241 }, 00:25:11.241 { 00:25:11.241 "method": "sock_impl_set_options", 00:25:11.241 "params": { 00:25:11.242 "impl_name": "posix", 00:25:11.242 "recv_buf_size": 2097152, 00:25:11.242 "send_buf_size": 2097152, 00:25:11.242 "enable_recv_pipe": true, 00:25:11.242 "enable_quickack": false, 00:25:11.242 "enable_placement_id": 0, 00:25:11.242 "enable_zerocopy_send_server": true, 00:25:11.242 "enable_zerocopy_send_client": false, 00:25:11.242 "zerocopy_threshold": 0, 00:25:11.242 "tls_version": 0, 00:25:11.242 "enable_ktls": false 00:25:11.242 } 00:25:11.242 } 00:25:11.242 ] 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "subsystem": "vmd", 00:25:11.242 "config": [] 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "subsystem": "accel", 00:25:11.242 "config": [ 00:25:11.242 { 00:25:11.242 "method": "accel_set_options", 00:25:11.242 "params": { 00:25:11.242 "small_cache_size": 128, 00:25:11.242 "large_cache_size": 16, 00:25:11.242 "task_count": 2048, 00:25:11.242 "sequence_count": 2048, 00:25:11.242 "buf_count": 2048 00:25:11.242 } 00:25:11.242 } 00:25:11.242 ] 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "subsystem": "bdev", 00:25:11.242 "config": [ 00:25:11.242 { 00:25:11.242 "method": "bdev_set_options", 00:25:11.242 "params": { 00:25:11.242 "bdev_io_pool_size": 65535, 00:25:11.242 "bdev_io_cache_size": 256, 00:25:11.242 "bdev_auto_examine": true, 00:25:11.242 "iobuf_small_cache_size": 128, 00:25:11.242 "iobuf_large_cache_size": 16 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "bdev_raid_set_options", 00:25:11.242 "params": { 00:25:11.242 "process_window_size_kb": 1024, 00:25:11.242 "process_max_bandwidth_mb_sec": 0 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "bdev_iscsi_set_options", 00:25:11.242 "params": { 00:25:11.242 "timeout_sec": 30 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "bdev_nvme_set_options", 00:25:11.242 "params": { 00:25:11.242 "action_on_timeout": "none", 00:25:11.242 "timeout_us": 0, 00:25:11.242 "timeout_admin_us": 0, 00:25:11.242 "keep_alive_timeout_ms": 10000, 00:25:11.242 "arbitration_burst": 0, 00:25:11.242 "low_priority_weight": 0, 00:25:11.242 "medium_priority_weight": 0, 00:25:11.242 "high_priority_weight": 0, 00:25:11.242 "nvme_adminq_poll_period_us": 10000, 00:25:11.242 "nvme_ioq_poll_period_us": 0, 00:25:11.242 "io_queue_requests": 0, 00:25:11.242 "delay_cmd_submit": true, 00:25:11.242 "transport_retry_count": 4, 00:25:11.242 "bdev_retry_count": 3, 00:25:11.242 "transport_ack_timeout": 0, 00:25:11.242 "ctrlr_loss_timeout_sec": 0, 00:25:11.242 "reconnect_delay_sec": 0, 00:25:11.242 "fast_io_fail_timeout_sec": 0, 00:25:11.242 "disable_auto_failback": false, 00:25:11.242 "generate_uuids": false, 00:25:11.242 "transport_tos": 0, 00:25:11.242 "nvme_error_stat": false, 00:25:11.242 "rdma_srq_size": 
0, 00:25:11.242 "io_path_stat": false, 00:25:11.242 "allow_accel_sequence": false, 00:25:11.242 "rdma_max_cq_size": 0, 00:25:11.242 "rdma_cm_event_timeout_ms": 0, 00:25:11.242 "dhchap_digests": [ 00:25:11.242 "sha256", 00:25:11.242 "sha384", 00:25:11.242 "sha512" 00:25:11.242 ], 00:25:11.242 "dhchap_dhgroups": [ 00:25:11.242 "null", 00:25:11.242 "ffdhe2048", 00:25:11.242 "ffdhe3072", 00:25:11.242 "ffdhe4096", 00:25:11.242 "ffdhe6144", 00:25:11.242 "ffdhe8192" 00:25:11.242 ] 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "bdev_nvme_set_hotplug", 00:25:11.242 "params": { 00:25:11.242 "period_us": 100000, 00:25:11.242 "enable": false 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "bdev_malloc_create", 00:25:11.242 "params": { 00:25:11.242 "name": "malloc0", 00:25:11.242 "num_blocks": 8192, 00:25:11.242 "block_size": 4096, 00:25:11.242 "physical_block_size": 4096, 00:25:11.242 "uuid": "ca8e03d7-1eca-4b36-87c1-65f7f0b5fe63", 00:25:11.242 "optimal_io_boundary": 0, 00:25:11.242 "md_size": 0, 00:25:11.242 "dif_type": 0, 00:25:11.242 "dif_is_head_of_md": false, 00:25:11.242 "dif_pi_format": 0 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "bdev_wait_for_examine" 00:25:11.242 } 00:25:11.242 ] 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "subsystem": "nbd", 00:25:11.242 "config": [] 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "subsystem": "scheduler", 00:25:11.242 "config": [ 00:25:11.242 { 00:25:11.242 "method": "framework_set_scheduler", 00:25:11.242 "params": { 00:25:11.242 "name": "static" 00:25:11.242 } 00:25:11.242 } 00:25:11.242 ] 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "subsystem": "nvmf", 00:25:11.242 "config": [ 00:25:11.242 { 00:25:11.242 "method": "nvmf_set_config", 00:25:11.242 "params": { 00:25:11.242 "discovery_filter": "match_any", 00:25:11.242 "admin_cmd_passthru": { 00:25:11.242 "identify_ctrlr": false 00:25:11.242 }, 00:25:11.242 "dhchap_digests": [ 00:25:11.242 "sha256", 00:25:11.242 "sha384", 00:25:11.242 "sha512" 00:25:11.242 ], 00:25:11.242 "dhchap_dhgroups": [ 00:25:11.242 "null", 00:25:11.242 "ffdhe2048", 00:25:11.242 "ffdhe3072", 00:25:11.242 "ffdhe4096", 00:25:11.242 "ffdhe6144", 00:25:11.242 "ffdhe8192" 00:25:11.242 ] 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_set_max_subsystems", 00:25:11.242 "params": { 00:25:11.242 "max_subsystems": 1024 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_set_crdt", 00:25:11.242 "params": { 00:25:11.242 "crdt1": 0, 00:25:11.242 "crdt2": 0, 00:25:11.242 "crdt3": 0 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_create_transport", 00:25:11.242 "params": { 00:25:11.242 "trtype": "TCP", 00:25:11.242 "max_queue_depth": 128, 00:25:11.242 "max_io_qpairs_per_ctrlr": 127, 00:25:11.242 "in_capsule_data_size": 4096, 00:25:11.242 "max_io_size": 131072, 00:25:11.242 "io_unit_size": 131072, 00:25:11.242 "max_aq_depth": 128, 00:25:11.242 "num_shared_buffers": 511, 00:25:11.242 "buf_cache_size": 4294967295, 00:25:11.242 "dif_insert_or_strip": false, 00:25:11.242 "zcopy": false, 00:25:11.242 "c2h_success": false, 00:25:11.242 "sock_priority": 0, 00:25:11.242 "abort_timeout_sec": 1, 00:25:11.242 "ack_timeout": 0, 00:25:11.242 "data_wr_pool_size": 0 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_create_subsystem", 00:25:11.242 "params": { 00:25:11.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.242 "allow_any_host": false, 00:25:11.242 "serial_number": "SPDK00000000000001", 
00:25:11.242 "model_number": "SPDK bdev Controller", 00:25:11.242 "max_namespaces": 10, 00:25:11.242 "min_cntlid": 1, 00:25:11.242 "max_cntlid": 65519, 00:25:11.242 "ana_reporting": false 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_subsystem_add_host", 00:25:11.242 "params": { 00:25:11.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.242 "host": "nqn.2016-06.io.spdk:host1", 00:25:11.242 "psk": "key0" 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_subsystem_add_ns", 00:25:11.242 "params": { 00:25:11.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.242 "namespace": { 00:25:11.242 "nsid": 1, 00:25:11.242 "bdev_name": "malloc0", 00:25:11.242 "nguid": "CA8E03D71ECA4B3687C165F7F0B5FE63", 00:25:11.242 "uuid": "ca8e03d7-1eca-4b36-87c1-65f7f0b5fe63", 00:25:11.242 "no_auto_visible": false 00:25:11.242 } 00:25:11.242 } 00:25:11.242 }, 00:25:11.242 { 00:25:11.242 "method": "nvmf_subsystem_add_listener", 00:25:11.242 "params": { 00:25:11.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.242 "listen_address": { 00:25:11.242 "trtype": "TCP", 00:25:11.242 "adrfam": "IPv4", 00:25:11.242 "traddr": "10.0.0.2", 00:25:11.242 "trsvcid": "4420" 00:25:11.242 }, 00:25:11.242 "secure_channel": true 00:25:11.242 } 00:25:11.242 } 00:25:11.242 ] 00:25:11.242 } 00:25:11.242 ] 00:25:11.242 }' 00:25:11.243 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:11.503 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:11.503 "subsystems": [ 00:25:11.503 { 00:25:11.503 "subsystem": "keyring", 00:25:11.503 "config": [ 00:25:11.503 { 00:25:11.503 "method": "keyring_file_add_key", 00:25:11.503 "params": { 00:25:11.503 "name": "key0", 00:25:11.503 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:11.503 } 00:25:11.503 } 00:25:11.503 ] 00:25:11.503 }, 00:25:11.503 { 00:25:11.503 "subsystem": "iobuf", 00:25:11.503 "config": [ 00:25:11.503 { 00:25:11.503 "method": "iobuf_set_options", 00:25:11.503 "params": { 00:25:11.503 "small_pool_count": 8192, 00:25:11.503 "large_pool_count": 1024, 00:25:11.503 "small_bufsize": 8192, 00:25:11.503 "large_bufsize": 135168 00:25:11.503 } 00:25:11.503 } 00:25:11.503 ] 00:25:11.503 }, 00:25:11.503 { 00:25:11.503 "subsystem": "sock", 00:25:11.503 "config": [ 00:25:11.503 { 00:25:11.503 "method": "sock_set_default_impl", 00:25:11.503 "params": { 00:25:11.503 "impl_name": "posix" 00:25:11.503 } 00:25:11.503 }, 00:25:11.503 { 00:25:11.503 "method": "sock_impl_set_options", 00:25:11.503 "params": { 00:25:11.503 "impl_name": "ssl", 00:25:11.503 "recv_buf_size": 4096, 00:25:11.503 "send_buf_size": 4096, 00:25:11.503 "enable_recv_pipe": true, 00:25:11.503 "enable_quickack": false, 00:25:11.503 "enable_placement_id": 0, 00:25:11.503 "enable_zerocopy_send_server": true, 00:25:11.503 "enable_zerocopy_send_client": false, 00:25:11.503 "zerocopy_threshold": 0, 00:25:11.503 "tls_version": 0, 00:25:11.503 "enable_ktls": false 00:25:11.503 } 00:25:11.503 }, 00:25:11.504 { 00:25:11.504 "method": "sock_impl_set_options", 00:25:11.504 "params": { 00:25:11.504 "impl_name": "posix", 00:25:11.504 "recv_buf_size": 2097152, 00:25:11.504 "send_buf_size": 2097152, 00:25:11.504 "enable_recv_pipe": true, 00:25:11.504 "enable_quickack": false, 00:25:11.504 "enable_placement_id": 0, 00:25:11.504 "enable_zerocopy_send_server": true, 00:25:11.504 "enable_zerocopy_send_client": false, 00:25:11.504 "zerocopy_threshold": 
0, 00:25:11.504 "tls_version": 0, 00:25:11.504 "enable_ktls": false 00:25:11.504 } 00:25:11.504 } 00:25:11.504 ] 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "subsystem": "vmd", 00:25:11.504 "config": [] 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "subsystem": "accel", 00:25:11.504 "config": [ 00:25:11.504 { 00:25:11.504 "method": "accel_set_options", 00:25:11.504 "params": { 00:25:11.504 "small_cache_size": 128, 00:25:11.504 "large_cache_size": 16, 00:25:11.504 "task_count": 2048, 00:25:11.504 "sequence_count": 2048, 00:25:11.504 "buf_count": 2048 00:25:11.504 } 00:25:11.504 } 00:25:11.504 ] 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "subsystem": "bdev", 00:25:11.504 "config": [ 00:25:11.504 { 00:25:11.504 "method": "bdev_set_options", 00:25:11.504 "params": { 00:25:11.504 "bdev_io_pool_size": 65535, 00:25:11.504 "bdev_io_cache_size": 256, 00:25:11.504 "bdev_auto_examine": true, 00:25:11.504 "iobuf_small_cache_size": 128, 00:25:11.504 "iobuf_large_cache_size": 16 00:25:11.504 } 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "method": "bdev_raid_set_options", 00:25:11.504 "params": { 00:25:11.504 "process_window_size_kb": 1024, 00:25:11.504 "process_max_bandwidth_mb_sec": 0 00:25:11.504 } 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "method": "bdev_iscsi_set_options", 00:25:11.504 "params": { 00:25:11.504 "timeout_sec": 30 00:25:11.504 } 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "method": "bdev_nvme_set_options", 00:25:11.504 "params": { 00:25:11.504 "action_on_timeout": "none", 00:25:11.504 "timeout_us": 0, 00:25:11.504 "timeout_admin_us": 0, 00:25:11.504 "keep_alive_timeout_ms": 10000, 00:25:11.504 "arbitration_burst": 0, 00:25:11.504 "low_priority_weight": 0, 00:25:11.504 "medium_priority_weight": 0, 00:25:11.504 "high_priority_weight": 0, 00:25:11.504 "nvme_adminq_poll_period_us": 10000, 00:25:11.504 "nvme_ioq_poll_period_us": 0, 00:25:11.504 "io_queue_requests": 512, 00:25:11.504 "delay_cmd_submit": true, 00:25:11.504 "transport_retry_count": 4, 00:25:11.504 "bdev_retry_count": 3, 00:25:11.504 "transport_ack_timeout": 0, 00:25:11.504 "ctrlr_loss_timeout_sec": 0, 00:25:11.504 "reconnect_delay_sec": 0, 00:25:11.504 "fast_io_fail_timeout_sec": 0, 00:25:11.504 "disable_auto_failback": false, 00:25:11.504 "generate_uuids": false, 00:25:11.504 "transport_tos": 0, 00:25:11.504 "nvme_error_stat": false, 00:25:11.504 "rdma_srq_size": 0, 00:25:11.504 "io_path_stat": false, 00:25:11.504 "allow_accel_sequence": false, 00:25:11.504 "rdma_max_cq_size": 0, 00:25:11.504 "rdma_cm_event_timeout_ms": 0, 00:25:11.504 "dhchap_digests": [ 00:25:11.504 "sha256", 00:25:11.504 "sha384", 00:25:11.504 "sha512" 00:25:11.504 ], 00:25:11.504 "dhchap_dhgroups": [ 00:25:11.504 "null", 00:25:11.504 "ffdhe2048", 00:25:11.504 "ffdhe3072", 00:25:11.504 "ffdhe4096", 00:25:11.504 "ffdhe6144", 00:25:11.504 "ffdhe8192" 00:25:11.504 ] 00:25:11.504 } 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "method": "bdev_nvme_attach_controller", 00:25:11.504 "params": { 00:25:11.504 "name": "TLSTEST", 00:25:11.504 "trtype": "TCP", 00:25:11.504 "adrfam": "IPv4", 00:25:11.504 "traddr": "10.0.0.2", 00:25:11.504 "trsvcid": "4420", 00:25:11.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.504 "prchk_reftag": false, 00:25:11.504 "prchk_guard": false, 00:25:11.504 "ctrlr_loss_timeout_sec": 0, 00:25:11.504 "reconnect_delay_sec": 0, 00:25:11.504 "fast_io_fail_timeout_sec": 0, 00:25:11.504 "psk": "key0", 00:25:11.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.504 "hdgst": false, 00:25:11.504 "ddgst": false, 00:25:11.504 "multipath": 
"multipath" 00:25:11.504 } 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "method": "bdev_nvme_set_hotplug", 00:25:11.504 "params": { 00:25:11.504 "period_us": 100000, 00:25:11.504 "enable": false 00:25:11.504 } 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "method": "bdev_wait_for_examine" 00:25:11.504 } 00:25:11.504 ] 00:25:11.504 }, 00:25:11.504 { 00:25:11.504 "subsystem": "nbd", 00:25:11.504 "config": [] 00:25:11.504 } 00:25:11.504 ] 00:25:11.504 }' 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1764103 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1764103 ']' 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1764103 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1764103 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1764103' 00:25:11.504 killing process with pid 1764103 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1764103 00:25:11.504 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.504 00:25:11.504 Latency(us) 00:25:11.504 [2024-10-13T12:21:15.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.504 [2024-10-13T12:21:15.211Z] =================================================================================================================== 00:25:11.504 [2024-10-13T12:21:15.211Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1764103 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1763661 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1763661 ']' 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1763661 00:25:11.504 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1763661 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1763661' 00:25:11.765 killing process with pid 1763661 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1763661 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@974 -- # wait 1763661 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.765 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:11.765 "subsystems": [ 00:25:11.765 { 00:25:11.765 "subsystem": "keyring", 00:25:11.765 "config": [ 00:25:11.765 { 00:25:11.765 "method": "keyring_file_add_key", 00:25:11.765 "params": { 00:25:11.765 "name": "key0", 00:25:11.765 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:11.765 } 00:25:11.765 } 00:25:11.766 ] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "iobuf", 00:25:11.766 "config": [ 00:25:11.766 { 00:25:11.766 "method": "iobuf_set_options", 00:25:11.766 "params": { 00:25:11.766 "small_pool_count": 8192, 00:25:11.766 "large_pool_count": 1024, 00:25:11.766 "small_bufsize": 8192, 00:25:11.766 "large_bufsize": 135168 00:25:11.766 } 00:25:11.766 } 00:25:11.766 ] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "sock", 00:25:11.766 "config": [ 00:25:11.766 { 00:25:11.766 "method": "sock_set_default_impl", 00:25:11.766 "params": { 00:25:11.766 "impl_name": "posix" 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "sock_impl_set_options", 00:25:11.766 "params": { 00:25:11.766 "impl_name": "ssl", 00:25:11.766 "recv_buf_size": 4096, 00:25:11.766 "send_buf_size": 4096, 00:25:11.766 "enable_recv_pipe": true, 00:25:11.766 "enable_quickack": false, 00:25:11.766 "enable_placement_id": 0, 00:25:11.766 "enable_zerocopy_send_server": true, 00:25:11.766 "enable_zerocopy_send_client": false, 00:25:11.766 "zerocopy_threshold": 0, 00:25:11.766 "tls_version": 0, 00:25:11.766 "enable_ktls": false 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "sock_impl_set_options", 00:25:11.766 "params": { 00:25:11.766 "impl_name": "posix", 00:25:11.766 "recv_buf_size": 2097152, 00:25:11.766 "send_buf_size": 2097152, 00:25:11.766 "enable_recv_pipe": true, 00:25:11.766 "enable_quickack": false, 00:25:11.766 "enable_placement_id": 0, 00:25:11.766 "enable_zerocopy_send_server": true, 00:25:11.766 "enable_zerocopy_send_client": false, 00:25:11.766 "zerocopy_threshold": 0, 00:25:11.766 "tls_version": 0, 00:25:11.766 "enable_ktls": false 00:25:11.766 } 00:25:11.766 } 00:25:11.766 ] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "vmd", 00:25:11.766 "config": [] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "accel", 00:25:11.766 "config": [ 00:25:11.766 { 00:25:11.766 "method": "accel_set_options", 00:25:11.766 "params": { 00:25:11.766 "small_cache_size": 128, 00:25:11.766 "large_cache_size": 16, 00:25:11.766 "task_count": 2048, 00:25:11.766 "sequence_count": 2048, 00:25:11.766 "buf_count": 2048 00:25:11.766 } 00:25:11.766 } 00:25:11.766 ] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "bdev", 00:25:11.766 "config": [ 00:25:11.766 { 00:25:11.766 "method": "bdev_set_options", 00:25:11.766 "params": { 00:25:11.766 "bdev_io_pool_size": 65535, 00:25:11.766 "bdev_io_cache_size": 256, 00:25:11.766 "bdev_auto_examine": true, 00:25:11.766 "iobuf_small_cache_size": 128, 00:25:11.766 "iobuf_large_cache_size": 16 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": 
"bdev_raid_set_options", 00:25:11.766 "params": { 00:25:11.766 "process_window_size_kb": 1024, 00:25:11.766 "process_max_bandwidth_mb_sec": 0 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "bdev_iscsi_set_options", 00:25:11.766 "params": { 00:25:11.766 "timeout_sec": 30 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "bdev_nvme_set_options", 00:25:11.766 "params": { 00:25:11.766 "action_on_timeout": "none", 00:25:11.766 "timeout_us": 0, 00:25:11.766 "timeout_admin_us": 0, 00:25:11.766 "keep_alive_timeout_ms": 10000, 00:25:11.766 "arbitration_burst": 0, 00:25:11.766 "low_priority_weight": 0, 00:25:11.766 "medium_priority_weight": 0, 00:25:11.766 "high_priority_weight": 0, 00:25:11.766 "nvme_adminq_poll_period_us": 10000, 00:25:11.766 "nvme_ioq_poll_period_us": 0, 00:25:11.766 "io_queue_requests": 0, 00:25:11.766 "delay_cmd_submit": true, 00:25:11.766 "transport_retry_count": 4, 00:25:11.766 "bdev_retry_count": 3, 00:25:11.766 "transport_ack_timeout": 0, 00:25:11.766 "ctrlr_loss_timeout_sec": 0, 00:25:11.766 "reconnect_delay_sec": 0, 00:25:11.766 "fast_io_fail_timeout_sec": 0, 00:25:11.766 "disable_auto_failback": false, 00:25:11.766 "generate_uuids": false, 00:25:11.766 "transport_tos": 0, 00:25:11.766 "nvme_error_stat": false, 00:25:11.766 "rdma_srq_size": 0, 00:25:11.766 "io_path_stat": false, 00:25:11.766 "allow_accel_sequence": false, 00:25:11.766 "rdma_max_cq_size": 0, 00:25:11.766 "rdma_cm_event_timeout_ms": 0, 00:25:11.766 "dhchap_digests": [ 00:25:11.766 "sha256", 00:25:11.766 "sha384", 00:25:11.766 "sha512" 00:25:11.766 ], 00:25:11.766 "dhchap_dhgroups": [ 00:25:11.766 "null", 00:25:11.766 "ffdhe2048", 00:25:11.766 "ffdhe3072", 00:25:11.766 "ffdhe4096", 00:25:11.766 "ffdhe6144", 00:25:11.766 "ffdhe8192" 00:25:11.766 ] 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "bdev_nvme_set_hotplug", 00:25:11.766 "params": { 00:25:11.766 "period_us": 100000, 00:25:11.766 "enable": false 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "bdev_malloc_create", 00:25:11.766 "params": { 00:25:11.766 "name": "malloc0", 00:25:11.766 "num_blocks": 8192, 00:25:11.766 "block_size": 4096, 00:25:11.766 "physical_block_size": 4096, 00:25:11.766 "uuid": "ca8e03d7-1eca-4b36-87c1-65f7f0b5fe63", 00:25:11.766 "optimal_io_boundary": 0, 00:25:11.766 "md_size": 0, 00:25:11.766 "dif_type": 0, 00:25:11.766 "dif_is_head_of_md": false, 00:25:11.766 "dif_pi_format": 0 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "bdev_wait_for_examine" 00:25:11.766 } 00:25:11.766 ] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "nbd", 00:25:11.766 "config": [] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "scheduler", 00:25:11.766 "config": [ 00:25:11.766 { 00:25:11.766 "method": "framework_set_scheduler", 00:25:11.766 "params": { 00:25:11.766 "name": "static" 00:25:11.766 } 00:25:11.766 } 00:25:11.766 ] 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "subsystem": "nvmf", 00:25:11.766 "config": [ 00:25:11.766 { 00:25:11.766 "method": "nvmf_set_config", 00:25:11.766 "params": { 00:25:11.766 "discovery_filter": "match_any", 00:25:11.766 "admin_cmd_passthru": { 00:25:11.766 "identify_ctrlr": false 00:25:11.766 }, 00:25:11.766 "dhchap_digests": [ 00:25:11.766 "sha256", 00:25:11.766 "sha384", 00:25:11.766 "sha512" 00:25:11.766 ], 00:25:11.766 "dhchap_dhgroups": [ 00:25:11.766 "null", 00:25:11.766 "ffdhe2048", 00:25:11.766 "ffdhe3072", 00:25:11.766 "ffdhe4096", 00:25:11.766 "ffdhe6144", 00:25:11.766 "ffdhe8192" 
00:25:11.766 ] 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "nvmf_set_max_subsystems", 00:25:11.766 "params": { 00:25:11.766 "max_subsystems": 1024 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "nvmf_set_crdt", 00:25:11.766 "params": { 00:25:11.766 "crdt1": 0, 00:25:11.766 "crdt2": 0, 00:25:11.766 "crdt3": 0 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "nvmf_create_transport", 00:25:11.766 "params": { 00:25:11.766 "trtype": "TCP", 00:25:11.766 "max_queue_depth": 128, 00:25:11.766 "max_io_qpairs_per_ctrlr": 127, 00:25:11.766 "in_capsule_data_size": 4096, 00:25:11.766 "max_io_size": 131072, 00:25:11.766 "io_unit_size": 131072, 00:25:11.766 "max_aq_depth": 128, 00:25:11.766 "num_shared_buffers": 511, 00:25:11.766 "buf_cache_size": 4294967295, 00:25:11.766 "dif_insert_or_strip": false, 00:25:11.766 "zcopy": false, 00:25:11.766 "c2h_success": false, 00:25:11.766 "sock_priority": 0, 00:25:11.766 "abort_timeout_sec": 1, 00:25:11.766 "ack_timeout": 0, 00:25:11.766 "data_wr_pool_size": 0 00:25:11.766 } 00:25:11.766 }, 00:25:11.766 { 00:25:11.766 "method": "nvmf_create_subsystem", 00:25:11.766 "params": { 00:25:11.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.766 "allow_any_host": false, 00:25:11.766 "serial_number": "SPDK00000000000001", 00:25:11.767 "model_number": "SPDK bdev Controller", 00:25:11.767 "max_namespaces": 10, 00:25:11.767 "min_cntlid": 1, 00:25:11.767 "max_cntlid": 65519, 00:25:11.767 "ana_reporting": false 00:25:11.767 } 00:25:11.767 }, 00:25:11.767 { 00:25:11.767 "method": "nvmf_subsystem_add_host", 00:25:11.767 "params": { 00:25:11.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.767 "host": "nqn.2016-06.io.spdk:host1", 00:25:11.767 "psk": "key0" 00:25:11.767 } 00:25:11.767 }, 00:25:11.767 { 00:25:11.767 "method": "nvmf_subsystem_add_ns", 00:25:11.767 "params": { 00:25:11.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.767 "namespace": { 00:25:11.767 "nsid": 1, 00:25:11.767 "bdev_name": "malloc0", 00:25:11.767 "nguid": "CA8E03D71ECA4B3687C165F7F0B5FE63", 00:25:11.767 "uuid": "ca8e03d7-1eca-4b36-87c1-65f7f0b5fe63", 00:25:11.767 "no_auto_visible": false 00:25:11.767 } 00:25:11.767 } 00:25:11.767 }, 00:25:11.767 { 00:25:11.767 "method": "nvmf_subsystem_add_listener", 00:25:11.767 "params": { 00:25:11.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.767 "listen_address": { 00:25:11.767 "trtype": "TCP", 00:25:11.767 "adrfam": "IPv4", 00:25:11.767 "traddr": "10.0.0.2", 00:25:11.767 "trsvcid": "4420" 00:25:11.767 }, 00:25:11.767 "secure_channel": true 00:25:11.767 } 00:25:11.767 } 00:25:11.767 ] 00:25:11.767 } 00:25:11.767 ] 00:25:11.767 }' 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1764462 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1764462 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1764462 ']' 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:11.767 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.767 [2024-10-13 14:21:15.432939] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:11.767 [2024-10-13 14:21:15.432993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.028 [2024-10-13 14:21:15.570020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:12.028 [2024-10-13 14:21:15.617839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.028 [2024-10-13 14:21:15.636873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.028 [2024-10-13 14:21:15.636908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.028 [2024-10-13 14:21:15.636915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.028 [2024-10-13 14:21:15.636921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.028 [2024-10-13 14:21:15.636927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:12.028 [2024-10-13 14:21:15.637613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.289 [2024-10-13 14:21:15.824467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.289 [2024-10-13 14:21:15.856418] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.289 [2024-10-13 14:21:15.856628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.550 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.550 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:12.550 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:12.550 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.550 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1764634 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1764634 /var/tmp/bdevperf.sock 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1764634 ']' 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
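In this phase neither app is configured through live RPCs: target/tls.sh@205 replays the tgtconf JSON captured by save_config into nvmf_tgt as -c /dev/fd/62, and target/tls.sh@206 replays bdevperfconf into bdevperf as -c /dev/fd/63. A rough sketch of that pattern, assuming bash process substitution (which is what produces those /dev/fd paths) and abbreviating the binary paths:

    # replay configs captured with save_config into fresh processes
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 \
        -w verify -t 10 -c <(echo "$bdevperfconf") &

Because the bdevperf config echoed below already carries the keyring_file_add_key and bdev_nvme_attach_controller entries, no per-socket setup RPCs are needed before the test starts this time.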
00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.812 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:12.812 "subsystems": [ 00:25:12.812 { 00:25:12.812 "subsystem": "keyring", 00:25:12.812 "config": [ 00:25:12.812 { 00:25:12.812 "method": "keyring_file_add_key", 00:25:12.812 "params": { 00:25:12.812 "name": "key0", 00:25:12.812 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:12.812 } 00:25:12.812 } 00:25:12.812 ] 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "subsystem": "iobuf", 00:25:12.812 "config": [ 00:25:12.812 { 00:25:12.812 "method": "iobuf_set_options", 00:25:12.812 "params": { 00:25:12.812 "small_pool_count": 8192, 00:25:12.812 "large_pool_count": 1024, 00:25:12.812 "small_bufsize": 8192, 00:25:12.812 "large_bufsize": 135168 00:25:12.812 } 00:25:12.812 } 00:25:12.812 ] 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "subsystem": "sock", 00:25:12.812 "config": [ 00:25:12.812 { 00:25:12.812 "method": "sock_set_default_impl", 00:25:12.812 "params": { 00:25:12.812 "impl_name": "posix" 00:25:12.812 } 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "method": "sock_impl_set_options", 00:25:12.812 "params": { 00:25:12.812 "impl_name": "ssl", 00:25:12.812 "recv_buf_size": 4096, 00:25:12.812 "send_buf_size": 4096, 00:25:12.812 "enable_recv_pipe": true, 00:25:12.812 "enable_quickack": false, 00:25:12.812 "enable_placement_id": 0, 00:25:12.812 "enable_zerocopy_send_server": true, 00:25:12.812 "enable_zerocopy_send_client": false, 00:25:12.812 "zerocopy_threshold": 0, 00:25:12.812 "tls_version": 0, 00:25:12.812 "enable_ktls": false 00:25:12.812 } 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "method": "sock_impl_set_options", 00:25:12.812 "params": { 00:25:12.812 "impl_name": "posix", 00:25:12.812 "recv_buf_size": 2097152, 00:25:12.812 "send_buf_size": 2097152, 00:25:12.812 "enable_recv_pipe": true, 00:25:12.812 "enable_quickack": false, 00:25:12.812 "enable_placement_id": 0, 00:25:12.812 "enable_zerocopy_send_server": true, 00:25:12.812 "enable_zerocopy_send_client": false, 00:25:12.812 "zerocopy_threshold": 0, 00:25:12.812 "tls_version": 0, 00:25:12.812 "enable_ktls": false 00:25:12.812 } 00:25:12.812 } 00:25:12.812 ] 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "subsystem": "vmd", 00:25:12.812 "config": [] 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "subsystem": "accel", 00:25:12.812 "config": [ 00:25:12.812 { 00:25:12.812 "method": "accel_set_options", 00:25:12.812 "params": { 00:25:12.812 "small_cache_size": 128, 00:25:12.812 "large_cache_size": 16, 00:25:12.812 "task_count": 2048, 00:25:12.812 "sequence_count": 2048, 00:25:12.812 "buf_count": 2048 00:25:12.812 } 00:25:12.812 } 00:25:12.812 ] 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "subsystem": "bdev", 00:25:12.812 "config": [ 00:25:12.812 { 00:25:12.812 "method": "bdev_set_options", 00:25:12.812 "params": { 00:25:12.812 
"bdev_io_pool_size": 65535, 00:25:12.812 "bdev_io_cache_size": 256, 00:25:12.812 "bdev_auto_examine": true, 00:25:12.812 "iobuf_small_cache_size": 128, 00:25:12.812 "iobuf_large_cache_size": 16 00:25:12.812 } 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "method": "bdev_raid_set_options", 00:25:12.812 "params": { 00:25:12.812 "process_window_size_kb": 1024, 00:25:12.812 "process_max_bandwidth_mb_sec": 0 00:25:12.812 } 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "method": "bdev_iscsi_set_options", 00:25:12.812 "params": { 00:25:12.812 "timeout_sec": 30 00:25:12.812 } 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "method": "bdev_nvme_set_options", 00:25:12.812 "params": { 00:25:12.812 "action_on_timeout": "none", 00:25:12.812 "timeout_us": 0, 00:25:12.812 "timeout_admin_us": 0, 00:25:12.812 "keep_alive_timeout_ms": 10000, 00:25:12.812 "arbitration_burst": 0, 00:25:12.812 "low_priority_weight": 0, 00:25:12.812 "medium_priority_weight": 0, 00:25:12.812 "high_priority_weight": 0, 00:25:12.812 "nvme_adminq_poll_period_us": 10000, 00:25:12.812 "nvme_ioq_poll_period_us": 0, 00:25:12.812 "io_queue_requests": 512, 00:25:12.812 "delay_cmd_submit": true, 00:25:12.812 "transport_retry_count": 4, 00:25:12.812 "bdev_retry_count": 3, 00:25:12.812 "transport_ack_timeout": 0, 00:25:12.812 "ctrlr_loss_timeout_sec": 0, 00:25:12.812 "reconnect_delay_sec": 0, 00:25:12.812 "fast_io_fail_timeout_sec": 0, 00:25:12.812 "disable_auto_failback": false, 00:25:12.812 "generate_uuids": false, 00:25:12.812 "transport_tos": 0, 00:25:12.812 "nvme_error_stat": false, 00:25:12.812 "rdma_srq_size": 0, 00:25:12.812 "io_path_stat": false, 00:25:12.812 "allow_accel_sequence": false, 00:25:12.812 "rdma_max_cq_size": 0, 00:25:12.812 "rdma_cm_event_timeout_ms": 0, 00:25:12.812 "dhchap_digests": [ 00:25:12.812 "sha256", 00:25:12.812 "sha384", 00:25:12.812 "sha512" 00:25:12.812 ], 00:25:12.812 "dhchap_dhgroups": [ 00:25:12.812 "null", 00:25:12.812 "ffdhe2048", 00:25:12.812 "ffdhe3072", 00:25:12.812 "ffdhe4096", 00:25:12.812 "ffdhe6144", 00:25:12.812 "ffdhe8192" 00:25:12.812 ] 00:25:12.812 } 00:25:12.812 }, 00:25:12.812 { 00:25:12.812 "method": "bdev_nvme_attach_controller", 00:25:12.812 "params": { 00:25:12.812 "name": "TLSTEST", 00:25:12.812 "trtype": "TCP", 00:25:12.812 "adrfam": "IPv4", 00:25:12.812 "traddr": "10.0.0.2", 00:25:12.812 "trsvcid": "4420", 00:25:12.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.812 "prchk_reftag": false, 00:25:12.812 "prchk_guard": false, 00:25:12.812 "ctrlr_loss_timeout_sec": 0, 00:25:12.812 "reconnect_delay_sec": 0, 00:25:12.813 "fast_io_fail_timeout_sec": 0, 00:25:12.813 "psk": "key0", 00:25:12.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.813 "hdgst": false, 00:25:12.813 "ddgst": false, 00:25:12.813 "multipath": "multipath" 00:25:12.813 } 00:25:12.813 }, 00:25:12.813 { 00:25:12.813 "method": "bdev_nvme_set_hotplug", 00:25:12.813 "params": { 00:25:12.813 "period_us": 100000, 00:25:12.813 "enable": false 00:25:12.813 } 00:25:12.813 }, 00:25:12.813 { 00:25:12.813 "method": "bdev_wait_for_examine" 00:25:12.813 } 00:25:12.813 ] 00:25:12.813 }, 00:25:12.813 { 00:25:12.813 "subsystem": "nbd", 00:25:12.813 "config": [] 00:25:12.813 } 00:25:12.813 ] 00:25:12.813 }' 00:25:12.813 [2024-10-13 14:21:16.327362] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:25:12.813 [2024-10-13 14:21:16.327417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764634 ] 00:25:12.813 [2024-10-13 14:21:16.459002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:12.813 [2024-10-13 14:21:16.506756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.074 [2024-10-13 14:21:16.522987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.074 [2024-10-13 14:21:16.651412] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.645 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.645 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:13.645 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:13.645 Running I/O for 10 seconds... 00:25:15.526 6040.00 IOPS, 23.59 MiB/s [2024-10-13T12:21:20.616Z] 5849.00 IOPS, 22.85 MiB/s [2024-10-13T12:21:21.557Z] 5778.00 IOPS, 22.57 MiB/s [2024-10-13T12:21:22.499Z] 5700.75 IOPS, 22.27 MiB/s [2024-10-13T12:21:23.440Z] 5669.20 IOPS, 22.15 MiB/s [2024-10-13T12:21:24.382Z] 5644.33 IOPS, 22.05 MiB/s [2024-10-13T12:21:25.325Z] 5621.71 IOPS, 21.96 MiB/s [2024-10-13T12:21:26.267Z] 5565.50 IOPS, 21.74 MiB/s [2024-10-13T12:21:27.652Z] 5545.78 IOPS, 21.66 MiB/s [2024-10-13T12:21:27.652Z] 5538.70 IOPS, 21.64 MiB/s 00:25:23.945 Latency(us) 00:25:23.945 [2024-10-13T12:21:27.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.945 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:23.945 Verification LBA range: start 0x0 length 0x2000 00:25:23.945 TLSTESTn1 : 10.03 5534.07 21.62 0.00 0.00 23080.31 4954.07 38099.78 00:25:23.945 [2024-10-13T12:21:27.652Z] =================================================================================================================== 00:25:23.945 [2024-10-13T12:21:27.652Z] Total : 5534.07 21.62 0.00 0.00 23080.31 4954.07 38099.78 00:25:23.945 { 00:25:23.945 "results": [ 00:25:23.945 { 00:25:23.945 "job": "TLSTESTn1", 00:25:23.945 "core_mask": "0x4", 00:25:23.945 "workload": "verify", 00:25:23.945 "status": "finished", 00:25:23.945 "verify_range": { 00:25:23.945 "start": 0, 00:25:23.945 "length": 8192 00:25:23.945 }, 00:25:23.945 "queue_depth": 128, 00:25:23.945 "io_size": 4096, 00:25:23.945 "runtime": 10.031503, 00:25:23.945 "iops": 5534.066031780083, 00:25:23.945 "mibps": 21.61744543664095, 00:25:23.945 "io_failed": 0, 00:25:23.945 "io_timeout": 0, 00:25:23.945 "avg_latency_us": 23080.308810984978, 00:25:23.945 "min_latency_us": 4954.066154360174, 00:25:23.945 "max_latency_us": 38099.77948546609 00:25:23.946 } 00:25:23.946 ], 00:25:23.946 "core_count": 1 00:25:23.946 } 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1764634 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1764634 
']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1764634 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1764634 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1764634' 00:25:23.946 killing process with pid 1764634 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1764634 00:25:23.946 Received shutdown signal, test time was about 10.000000 seconds 00:25:23.946 00:25:23.946 Latency(us) 00:25:23.946 [2024-10-13T12:21:27.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.946 [2024-10-13T12:21:27.653Z] =================================================================================================================== 00:25:23.946 [2024-10-13T12:21:27.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1764634 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1764462 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1764462 ']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1764462 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1764462 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1764462' 00:25:23.946 killing process with pid 1764462 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1764462 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1764462 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1766832 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1766832 00:25:23.946 14:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1766832 ']' 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.946 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:24.207 [2024-10-13 14:21:27.656953] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:24.207 [2024-10-13 14:21:27.657010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.207 [2024-10-13 14:21:27.795096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:24.207 [2024-10-13 14:21:27.843948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.207 [2024-10-13 14:21:27.869417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.207 [2024-10-13 14:21:27.869467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.207 [2024-10-13 14:21:27.869476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.207 [2024-10-13 14:21:27.869483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.207 [2024-10-13 14:21:27.869489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
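[editor's note] The app_setup_trace notices above describe the trace-snapshot workflow for this target instance. A minimal sketch of acting on them, assuming the spdk_trace binary built in this same tree (the "-s nvmf -i 0" arguments and the /dev/shm path are taken verbatim from the notices; the binary location is an assumption):

    # Snapshot the running nvmf target's tracepoints, matching the
    # "spdk_trace -s nvmf -i 0" hint printed by app_setup_trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw shared-memory trace file for offline analysis,
    # as the last notice suggests:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0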
00:25:24.207 [2024-10-13 14:21:27.870290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.777 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.777 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:24.777 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:24.777 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.777 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.038 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.038 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.TUXgXkn2vk 00:25:25.038 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TUXgXkn2vk 00:25:25.038 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:25.038 [2024-10-13 14:21:28.677623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.038 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:25.298 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:25.559 [2024-10-13 14:21:29.053698] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:25.559 [2024-10-13 14:21:29.053993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.559 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:25.559 malloc0 00:25:25.819 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:25.819 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:26.080 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1767199 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1767199 /var/tmp/bdevperf.sock 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1767199 ']' 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:26.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.343 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.343 [2024-10-13 14:21:29.907028] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:26.343 [2024-10-13 14:21:29.907117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767199 ] 00:25:26.343 [2024-10-13 14:21:30.042514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:26.605 [2024-10-13 14:21:30.089798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.605 [2024-10-13 14:21:30.112240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.177 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.177 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:27.177 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:27.437 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:27.437 [2024-10-13 14:21:31.066412] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:27.698 nvme0n1 00:25:27.698 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:27.698 Running I/O for 1 seconds... 
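[editor's note] The run above exercises the TLS PSK wiring end to end. As a recap, the RPC sequence the target/initiator pair walks through is the following (every command is copied from the trace above; "rpc.py" is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path the script actually uses):

    # Target side: TCP transport, subsystem, TLS-enabled listener (-k),
    # backing malloc bdev, namespace, PSK key, and host authorization.
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # Initiator side (bdevperf's RPC socket): register the same key,
    # then attach the controller over TLS with --psk.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1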
00:25:28.643 4406.00 IOPS, 17.21 MiB/s 00:25:28.643 Latency(us) 00:25:28.643 [2024-10-13T12:21:32.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.643 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:28.643 Verification LBA range: start 0x0 length 0x2000 00:25:28.643 nvme0n1 : 1.02 4459.97 17.42 0.00 0.00 28462.20 4652.99 35910.14 00:25:28.643 [2024-10-13T12:21:32.350Z] =================================================================================================================== 00:25:28.643 [2024-10-13T12:21:32.350Z] Total : 4459.97 17.42 0.00 0.00 28462.20 4652.99 35910.14 00:25:28.643 { 00:25:28.643 "results": [ 00:25:28.643 { 00:25:28.643 "job": "nvme0n1", 00:25:28.643 "core_mask": "0x2", 00:25:28.643 "workload": "verify", 00:25:28.643 "status": "finished", 00:25:28.643 "verify_range": { 00:25:28.643 "start": 0, 00:25:28.643 "length": 8192 00:25:28.643 }, 00:25:28.643 "queue_depth": 128, 00:25:28.643 "io_size": 4096, 00:25:28.643 "runtime": 1.016823, 00:25:28.643 "iops": 4459.969925935979, 00:25:28.643 "mibps": 17.421757523187416, 00:25:28.643 "io_failed": 0, 00:25:28.643 "io_timeout": 0, 00:25:28.643 "avg_latency_us": 28462.198993535447, 00:25:28.643 "min_latency_us": 4652.990310725025, 00:25:28.643 "max_latency_us": 35910.13698630137 00:25:28.643 } 00:25:28.643 ], 00:25:28.643 "core_count": 1 00:25:28.643 } 00:25:28.643 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1767199 00:25:28.643 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1767199 ']' 00:25:28.643 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1767199 00:25:28.643 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:28.643 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.643 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767199 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767199' 00:25:28.966 killing process with pid 1767199 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1767199 00:25:28.966 Received shutdown signal, test time was about 1.000000 seconds 00:25:28.966 00:25:28.966 Latency(us) 00:25:28.966 [2024-10-13T12:21:32.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.966 [2024-10-13T12:21:32.673Z] =================================================================================================================== 00:25:28.966 [2024-10-13T12:21:32.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1767199 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1766832 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1766832 ']' 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1766832 00:25:28.966 14:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766832 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766832' 00:25:28.966 killing process with pid 1766832 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1766832 00:25:28.966 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1766832 00:25:29.259 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:29.259 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:29.259 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.259 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.259 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1767880 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1767880 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1767880 ']' 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.260 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.260 [2024-10-13 14:21:32.716812] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:29.260 [2024-10-13 14:21:32.716873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.260 [2024-10-13 14:21:32.855969] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:29.260 [2024-10-13 14:21:32.906229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.260 [2024-10-13 14:21:32.931088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:29.260 [2024-10-13 14:21:32.931138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.260 [2024-10-13 14:21:32.931147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.260 [2024-10-13 14:21:32.931154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.260 [2024-10-13 14:21:32.931160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.260 [2024-10-13 14:21:32.931980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.832 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.832 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:29.832 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:29.832 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.832 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.093 [2024-10-13 14:21:33.587297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.093 malloc0 00:25:30.093 [2024-10-13 14:21:33.617509] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.093 [2024-10-13 14:21:33.617849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1767963 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1767963 /var/tmp/bdevperf.sock 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1767963 ']' 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.093 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.093 [2024-10-13 14:21:33.700466] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:30.093 [2024-10-13 14:21:33.700524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767963 ] 00:25:30.354 [2024-10-13 14:21:33.834841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:30.354 [2024-10-13 14:21:33.880957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.354 [2024-10-13 14:21:33.900438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.926 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.926 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:30.926 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TUXgXkn2vk 00:25:31.187 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:31.187 [2024-10-13 14:21:34.847057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.447 nvme0n1 00:25:31.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:31.447 Running I/O for 1 seconds... 
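[editor's note] bdevperf's perform_tests helper prints its summary twice: as a human-readable latency table and as the JSON document reproduced below. A hedged sketch for post-processing that JSON with jq (jq is already used later in this script; the field names are exactly the ones visible in the dumps, but the result.json filename is hypothetical — the log prints the JSON to stdout rather than saving it):

    # Hypothetical post-processing: pull the headline numbers out of a
    # saved perform_tests JSON result.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' result.json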
00:25:32.389 5497.00 IOPS, 21.47 MiB/s 00:25:32.389 Latency(us) 00:25:32.389 [2024-10-13T12:21:36.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.389 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:32.389 Verification LBA range: start 0x0 length 0x2000 00:25:32.389 nvme0n1 : 1.01 5542.94 21.65 0.00 0.00 22939.77 6048.89 27480.01 00:25:32.389 [2024-10-13T12:21:36.096Z] =================================================================================================================== 00:25:32.389 [2024-10-13T12:21:36.096Z] Total : 5542.94 21.65 0.00 0.00 22939.77 6048.89 27480.01 00:25:32.389 { 00:25:32.389 "results": [ 00:25:32.389 { 00:25:32.389 "job": "nvme0n1", 00:25:32.389 "core_mask": "0x2", 00:25:32.389 "workload": "verify", 00:25:32.389 "status": "finished", 00:25:32.389 "verify_range": { 00:25:32.389 "start": 0, 00:25:32.389 "length": 8192 00:25:32.390 }, 00:25:32.390 "queue_depth": 128, 00:25:32.390 "io_size": 4096, 00:25:32.390 "runtime": 1.014985, 00:25:32.390 "iops": 5542.9390582126825, 00:25:32.390 "mibps": 21.65210569614329, 00:25:32.390 "io_failed": 0, 00:25:32.390 "io_timeout": 0, 00:25:32.390 "avg_latency_us": 22939.77203117263, 00:25:32.390 "min_latency_us": 6048.887403942533, 00:25:32.390 "max_latency_us": 27480.013364517206 00:25:32.390 } 00:25:32.390 ], 00:25:32.390 "core_count": 1 00:25:32.390 } 00:25:32.390 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:32.390 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.390 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.650 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.650 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:32.650 "subsystems": [ 00:25:32.650 { 00:25:32.650 "subsystem": "keyring", 00:25:32.650 "config": [ 00:25:32.650 { 00:25:32.650 "method": "keyring_file_add_key", 00:25:32.650 "params": { 00:25:32.650 "name": "key0", 00:25:32.650 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:32.650 } 00:25:32.650 } 00:25:32.650 ] 00:25:32.650 }, 00:25:32.650 { 00:25:32.650 "subsystem": "iobuf", 00:25:32.650 "config": [ 00:25:32.650 { 00:25:32.650 "method": "iobuf_set_options", 00:25:32.650 "params": { 00:25:32.650 "small_pool_count": 8192, 00:25:32.650 "large_pool_count": 1024, 00:25:32.650 "small_bufsize": 8192, 00:25:32.650 "large_bufsize": 135168 00:25:32.650 } 00:25:32.650 } 00:25:32.650 ] 00:25:32.650 }, 00:25:32.650 { 00:25:32.650 "subsystem": "sock", 00:25:32.650 "config": [ 00:25:32.650 { 00:25:32.650 "method": "sock_set_default_impl", 00:25:32.650 "params": { 00:25:32.650 "impl_name": "posix" 00:25:32.650 } 00:25:32.650 }, 00:25:32.650 { 00:25:32.650 "method": "sock_impl_set_options", 00:25:32.650 "params": { 00:25:32.650 "impl_name": "ssl", 00:25:32.650 "recv_buf_size": 4096, 00:25:32.650 "send_buf_size": 4096, 00:25:32.650 "enable_recv_pipe": true, 00:25:32.650 "enable_quickack": false, 00:25:32.650 "enable_placement_id": 0, 00:25:32.650 "enable_zerocopy_send_server": true, 00:25:32.650 "enable_zerocopy_send_client": false, 00:25:32.650 "zerocopy_threshold": 0, 00:25:32.650 "tls_version": 0, 00:25:32.650 "enable_ktls": false 00:25:32.650 } 00:25:32.650 }, 00:25:32.650 { 00:25:32.650 "method": "sock_impl_set_options", 00:25:32.650 "params": { 00:25:32.650 "impl_name": "posix", 00:25:32.650 "recv_buf_size": 
2097152, 00:25:32.650 "send_buf_size": 2097152, 00:25:32.650 "enable_recv_pipe": true, 00:25:32.651 "enable_quickack": false, 00:25:32.651 "enable_placement_id": 0, 00:25:32.651 "enable_zerocopy_send_server": true, 00:25:32.651 "enable_zerocopy_send_client": false, 00:25:32.651 "zerocopy_threshold": 0, 00:25:32.651 "tls_version": 0, 00:25:32.651 "enable_ktls": false 00:25:32.651 } 00:25:32.651 } 00:25:32.651 ] 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "subsystem": "vmd", 00:25:32.651 "config": [] 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "subsystem": "accel", 00:25:32.651 "config": [ 00:25:32.651 { 00:25:32.651 "method": "accel_set_options", 00:25:32.651 "params": { 00:25:32.651 "small_cache_size": 128, 00:25:32.651 "large_cache_size": 16, 00:25:32.651 "task_count": 2048, 00:25:32.651 "sequence_count": 2048, 00:25:32.651 "buf_count": 2048 00:25:32.651 } 00:25:32.651 } 00:25:32.651 ] 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "subsystem": "bdev", 00:25:32.651 "config": [ 00:25:32.651 { 00:25:32.651 "method": "bdev_set_options", 00:25:32.651 "params": { 00:25:32.651 "bdev_io_pool_size": 65535, 00:25:32.651 "bdev_io_cache_size": 256, 00:25:32.651 "bdev_auto_examine": true, 00:25:32.651 "iobuf_small_cache_size": 128, 00:25:32.651 "iobuf_large_cache_size": 16 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "bdev_raid_set_options", 00:25:32.651 "params": { 00:25:32.651 "process_window_size_kb": 1024, 00:25:32.651 "process_max_bandwidth_mb_sec": 0 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "bdev_iscsi_set_options", 00:25:32.651 "params": { 00:25:32.651 "timeout_sec": 30 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "bdev_nvme_set_options", 00:25:32.651 "params": { 00:25:32.651 "action_on_timeout": "none", 00:25:32.651 "timeout_us": 0, 00:25:32.651 "timeout_admin_us": 0, 00:25:32.651 "keep_alive_timeout_ms": 10000, 00:25:32.651 "arbitration_burst": 0, 00:25:32.651 "low_priority_weight": 0, 00:25:32.651 "medium_priority_weight": 0, 00:25:32.651 "high_priority_weight": 0, 00:25:32.651 "nvme_adminq_poll_period_us": 10000, 00:25:32.651 "nvme_ioq_poll_period_us": 0, 00:25:32.651 "io_queue_requests": 0, 00:25:32.651 "delay_cmd_submit": true, 00:25:32.651 "transport_retry_count": 4, 00:25:32.651 "bdev_retry_count": 3, 00:25:32.651 "transport_ack_timeout": 0, 00:25:32.651 "ctrlr_loss_timeout_sec": 0, 00:25:32.651 "reconnect_delay_sec": 0, 00:25:32.651 "fast_io_fail_timeout_sec": 0, 00:25:32.651 "disable_auto_failback": false, 00:25:32.651 "generate_uuids": false, 00:25:32.651 "transport_tos": 0, 00:25:32.651 "nvme_error_stat": false, 00:25:32.651 "rdma_srq_size": 0, 00:25:32.651 "io_path_stat": false, 00:25:32.651 "allow_accel_sequence": false, 00:25:32.651 "rdma_max_cq_size": 0, 00:25:32.651 "rdma_cm_event_timeout_ms": 0, 00:25:32.651 "dhchap_digests": [ 00:25:32.651 "sha256", 00:25:32.651 "sha384", 00:25:32.651 "sha512" 00:25:32.651 ], 00:25:32.651 "dhchap_dhgroups": [ 00:25:32.651 "null", 00:25:32.651 "ffdhe2048", 00:25:32.651 "ffdhe3072", 00:25:32.651 "ffdhe4096", 00:25:32.651 "ffdhe6144", 00:25:32.651 "ffdhe8192" 00:25:32.651 ] 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "bdev_nvme_set_hotplug", 00:25:32.651 "params": { 00:25:32.651 "period_us": 100000, 00:25:32.651 "enable": false 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "bdev_malloc_create", 00:25:32.651 "params": { 00:25:32.651 "name": "malloc0", 00:25:32.651 "num_blocks": 8192, 00:25:32.651 "block_size": 4096, 
00:25:32.651 "physical_block_size": 4096, 00:25:32.651 "uuid": "c70144b7-ddd1-4956-a0db-0b617727e2e7", 00:25:32.651 "optimal_io_boundary": 0, 00:25:32.651 "md_size": 0, 00:25:32.651 "dif_type": 0, 00:25:32.651 "dif_is_head_of_md": false, 00:25:32.651 "dif_pi_format": 0 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "bdev_wait_for_examine" 00:25:32.651 } 00:25:32.651 ] 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "subsystem": "nbd", 00:25:32.651 "config": [] 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "subsystem": "scheduler", 00:25:32.651 "config": [ 00:25:32.651 { 00:25:32.651 "method": "framework_set_scheduler", 00:25:32.651 "params": { 00:25:32.651 "name": "static" 00:25:32.651 } 00:25:32.651 } 00:25:32.651 ] 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "subsystem": "nvmf", 00:25:32.651 "config": [ 00:25:32.651 { 00:25:32.651 "method": "nvmf_set_config", 00:25:32.651 "params": { 00:25:32.651 "discovery_filter": "match_any", 00:25:32.651 "admin_cmd_passthru": { 00:25:32.651 "identify_ctrlr": false 00:25:32.651 }, 00:25:32.651 "dhchap_digests": [ 00:25:32.651 "sha256", 00:25:32.651 "sha384", 00:25:32.651 "sha512" 00:25:32.651 ], 00:25:32.651 "dhchap_dhgroups": [ 00:25:32.651 "null", 00:25:32.651 "ffdhe2048", 00:25:32.651 "ffdhe3072", 00:25:32.651 "ffdhe4096", 00:25:32.651 "ffdhe6144", 00:25:32.651 "ffdhe8192" 00:25:32.651 ] 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_set_max_subsystems", 00:25:32.651 "params": { 00:25:32.651 "max_subsystems": 1024 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_set_crdt", 00:25:32.651 "params": { 00:25:32.651 "crdt1": 0, 00:25:32.651 "crdt2": 0, 00:25:32.651 "crdt3": 0 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_create_transport", 00:25:32.651 "params": { 00:25:32.651 "trtype": "TCP", 00:25:32.651 "max_queue_depth": 128, 00:25:32.651 "max_io_qpairs_per_ctrlr": 127, 00:25:32.651 "in_capsule_data_size": 4096, 00:25:32.651 "max_io_size": 131072, 00:25:32.651 "io_unit_size": 131072, 00:25:32.651 "max_aq_depth": 128, 00:25:32.651 "num_shared_buffers": 511, 00:25:32.651 "buf_cache_size": 4294967295, 00:25:32.651 "dif_insert_or_strip": false, 00:25:32.651 "zcopy": false, 00:25:32.651 "c2h_success": false, 00:25:32.651 "sock_priority": 0, 00:25:32.651 "abort_timeout_sec": 1, 00:25:32.651 "ack_timeout": 0, 00:25:32.651 "data_wr_pool_size": 0 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_create_subsystem", 00:25:32.651 "params": { 00:25:32.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.651 "allow_any_host": false, 00:25:32.651 "serial_number": "00000000000000000000", 00:25:32.651 "model_number": "SPDK bdev Controller", 00:25:32.651 "max_namespaces": 32, 00:25:32.651 "min_cntlid": 1, 00:25:32.651 "max_cntlid": 65519, 00:25:32.651 "ana_reporting": false 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_subsystem_add_host", 00:25:32.651 "params": { 00:25:32.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.651 "host": "nqn.2016-06.io.spdk:host1", 00:25:32.651 "psk": "key0" 00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_subsystem_add_ns", 00:25:32.651 "params": { 00:25:32.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.651 "namespace": { 00:25:32.651 "nsid": 1, 00:25:32.651 "bdev_name": "malloc0", 00:25:32.651 "nguid": "C70144B7DDD14956A0DB0B617727E2E7", 00:25:32.651 "uuid": "c70144b7-ddd1-4956-a0db-0b617727e2e7", 00:25:32.651 "no_auto_visible": false 00:25:32.651 } 
00:25:32.651 } 00:25:32.651 }, 00:25:32.651 { 00:25:32.651 "method": "nvmf_subsystem_add_listener", 00:25:32.651 "params": { 00:25:32.651 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.651 "listen_address": { 00:25:32.651 "trtype": "TCP", 00:25:32.651 "adrfam": "IPv4", 00:25:32.651 "traddr": "10.0.0.2", 00:25:32.651 "trsvcid": "4420" 00:25:32.651 }, 00:25:32.651 "secure_channel": false, 00:25:32.651 "sock_impl": "ssl" 00:25:32.651 } 00:25:32.651 } 00:25:32.651 ] 00:25:32.651 } 00:25:32.651 ] 00:25:32.651 }' 00:25:32.651 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:32.912 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:32.912 "subsystems": [ 00:25:32.912 { 00:25:32.912 "subsystem": "keyring", 00:25:32.912 "config": [ 00:25:32.912 { 00:25:32.912 "method": "keyring_file_add_key", 00:25:32.912 "params": { 00:25:32.912 "name": "key0", 00:25:32.912 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:32.912 } 00:25:32.912 } 00:25:32.912 ] 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "subsystem": "iobuf", 00:25:32.912 "config": [ 00:25:32.912 { 00:25:32.912 "method": "iobuf_set_options", 00:25:32.912 "params": { 00:25:32.912 "small_pool_count": 8192, 00:25:32.912 "large_pool_count": 1024, 00:25:32.912 "small_bufsize": 8192, 00:25:32.912 "large_bufsize": 135168 00:25:32.912 } 00:25:32.912 } 00:25:32.912 ] 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "subsystem": "sock", 00:25:32.912 "config": [ 00:25:32.912 { 00:25:32.912 "method": "sock_set_default_impl", 00:25:32.912 "params": { 00:25:32.912 "impl_name": "posix" 00:25:32.912 } 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "method": "sock_impl_set_options", 00:25:32.912 "params": { 00:25:32.912 "impl_name": "ssl", 00:25:32.912 "recv_buf_size": 4096, 00:25:32.912 "send_buf_size": 4096, 00:25:32.912 "enable_recv_pipe": true, 00:25:32.912 "enable_quickack": false, 00:25:32.912 "enable_placement_id": 0, 00:25:32.912 "enable_zerocopy_send_server": true, 00:25:32.912 "enable_zerocopy_send_client": false, 00:25:32.912 "zerocopy_threshold": 0, 00:25:32.912 "tls_version": 0, 00:25:32.912 "enable_ktls": false 00:25:32.912 } 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "method": "sock_impl_set_options", 00:25:32.912 "params": { 00:25:32.912 "impl_name": "posix", 00:25:32.912 "recv_buf_size": 2097152, 00:25:32.912 "send_buf_size": 2097152, 00:25:32.912 "enable_recv_pipe": true, 00:25:32.912 "enable_quickack": false, 00:25:32.912 "enable_placement_id": 0, 00:25:32.912 "enable_zerocopy_send_server": true, 00:25:32.912 "enable_zerocopy_send_client": false, 00:25:32.912 "zerocopy_threshold": 0, 00:25:32.912 "tls_version": 0, 00:25:32.912 "enable_ktls": false 00:25:32.912 } 00:25:32.912 } 00:25:32.912 ] 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "subsystem": "vmd", 00:25:32.912 "config": [] 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "subsystem": "accel", 00:25:32.912 "config": [ 00:25:32.912 { 00:25:32.912 "method": "accel_set_options", 00:25:32.912 "params": { 00:25:32.912 "small_cache_size": 128, 00:25:32.912 "large_cache_size": 16, 00:25:32.912 "task_count": 2048, 00:25:32.912 "sequence_count": 2048, 00:25:32.912 "buf_count": 2048 00:25:32.912 } 00:25:32.912 } 00:25:32.912 ] 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "subsystem": "bdev", 00:25:32.912 "config": [ 00:25:32.912 { 00:25:32.912 "method": "bdev_set_options", 00:25:32.912 "params": { 00:25:32.912 "bdev_io_pool_size": 65535, 00:25:32.912 
"bdev_io_cache_size": 256, 00:25:32.912 "bdev_auto_examine": true, 00:25:32.912 "iobuf_small_cache_size": 128, 00:25:32.912 "iobuf_large_cache_size": 16 00:25:32.912 } 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "method": "bdev_raid_set_options", 00:25:32.912 "params": { 00:25:32.912 "process_window_size_kb": 1024, 00:25:32.912 "process_max_bandwidth_mb_sec": 0 00:25:32.912 } 00:25:32.912 }, 00:25:32.912 { 00:25:32.912 "method": "bdev_iscsi_set_options", 00:25:32.912 "params": { 00:25:32.912 "timeout_sec": 30 00:25:32.912 } 00:25:32.913 }, 00:25:32.913 { 00:25:32.913 "method": "bdev_nvme_set_options", 00:25:32.913 "params": { 00:25:32.913 "action_on_timeout": "none", 00:25:32.913 "timeout_us": 0, 00:25:32.913 "timeout_admin_us": 0, 00:25:32.913 "keep_alive_timeout_ms": 10000, 00:25:32.913 "arbitration_burst": 0, 00:25:32.913 "low_priority_weight": 0, 00:25:32.913 "medium_priority_weight": 0, 00:25:32.913 "high_priority_weight": 0, 00:25:32.913 "nvme_adminq_poll_period_us": 10000, 00:25:32.913 "nvme_ioq_poll_period_us": 0, 00:25:32.913 "io_queue_requests": 512, 00:25:32.913 "delay_cmd_submit": true, 00:25:32.913 "transport_retry_count": 4, 00:25:32.913 "bdev_retry_count": 3, 00:25:32.913 "transport_ack_timeout": 0, 00:25:32.913 "ctrlr_loss_timeout_sec": 0, 00:25:32.913 "reconnect_delay_sec": 0, 00:25:32.913 "fast_io_fail_timeout_sec": 0, 00:25:32.913 "disable_auto_failback": false, 00:25:32.913 "generate_uuids": false, 00:25:32.913 "transport_tos": 0, 00:25:32.913 "nvme_error_stat": false, 00:25:32.913 "rdma_srq_size": 0, 00:25:32.913 "io_path_stat": false, 00:25:32.913 "allow_accel_sequence": false, 00:25:32.913 "rdma_max_cq_size": 0, 00:25:32.913 "rdma_cm_event_timeout_ms": 0, 00:25:32.913 "dhchap_digests": [ 00:25:32.913 "sha256", 00:25:32.913 "sha384", 00:25:32.913 "sha512" 00:25:32.913 ], 00:25:32.913 "dhchap_dhgroups": [ 00:25:32.913 "null", 00:25:32.913 "ffdhe2048", 00:25:32.913 "ffdhe3072", 00:25:32.913 "ffdhe4096", 00:25:32.913 "ffdhe6144", 00:25:32.913 "ffdhe8192" 00:25:32.913 ] 00:25:32.913 } 00:25:32.913 }, 00:25:32.913 { 00:25:32.913 "method": "bdev_nvme_attach_controller", 00:25:32.913 "params": { 00:25:32.913 "name": "nvme0", 00:25:32.913 "trtype": "TCP", 00:25:32.913 "adrfam": "IPv4", 00:25:32.913 "traddr": "10.0.0.2", 00:25:32.913 "trsvcid": "4420", 00:25:32.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.913 "prchk_reftag": false, 00:25:32.913 "prchk_guard": false, 00:25:32.913 "ctrlr_loss_timeout_sec": 0, 00:25:32.913 "reconnect_delay_sec": 0, 00:25:32.913 "fast_io_fail_timeout_sec": 0, 00:25:32.913 "psk": "key0", 00:25:32.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.913 "hdgst": false, 00:25:32.913 "ddgst": false, 00:25:32.913 "multipath": "multipath" 00:25:32.913 } 00:25:32.913 }, 00:25:32.913 { 00:25:32.913 "method": "bdev_nvme_set_hotplug", 00:25:32.913 "params": { 00:25:32.913 "period_us": 100000, 00:25:32.913 "enable": false 00:25:32.913 } 00:25:32.913 }, 00:25:32.913 { 00:25:32.913 "method": "bdev_enable_histogram", 00:25:32.913 "params": { 00:25:32.913 "name": "nvme0n1", 00:25:32.913 "enable": true 00:25:32.913 } 00:25:32.913 }, 00:25:32.913 { 00:25:32.913 "method": "bdev_wait_for_examine" 00:25:32.913 } 00:25:32.913 ] 00:25:32.913 }, 00:25:32.913 { 00:25:32.913 "subsystem": "nbd", 00:25:32.913 "config": [] 00:25:32.913 } 00:25:32.913 ] 00:25:32.913 }' 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1767963 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # 
'[' -z 1767963 ']' 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1767963 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767963 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767963' 00:25:32.913 killing process with pid 1767963 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1767963 00:25:32.913 Received shutdown signal, test time was about 1.000000 seconds 00:25:32.913 00:25:32.913 Latency(us) 00:25:32.913 [2024-10-13T12:21:36.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.913 [2024-10-13T12:21:36.620Z] =================================================================================================================== 00:25:32.913 [2024-10-13T12:21:36.620Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1767963 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1767880 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1767880 ']' 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1767880 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:32.913 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767880 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767880' 00:25:33.175 killing process with pid 1767880 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1767880 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1767880 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:33.175 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:33.175 "subsystems": [ 00:25:33.175 { 00:25:33.175 "subsystem": "keyring", 00:25:33.175 "config": [ 00:25:33.175 { 00:25:33.175 "method": "keyring_file_add_key", 00:25:33.175 "params": { 00:25:33.175 "name": "key0", 
00:25:33.175 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:33.175 } 00:25:33.175 } 00:25:33.175 ] 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "subsystem": "iobuf", 00:25:33.175 "config": [ 00:25:33.175 { 00:25:33.175 "method": "iobuf_set_options", 00:25:33.175 "params": { 00:25:33.175 "small_pool_count": 8192, 00:25:33.175 "large_pool_count": 1024, 00:25:33.175 "small_bufsize": 8192, 00:25:33.175 "large_bufsize": 135168 00:25:33.175 } 00:25:33.175 } 00:25:33.175 ] 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "subsystem": "sock", 00:25:33.175 "config": [ 00:25:33.175 { 00:25:33.175 "method": "sock_set_default_impl", 00:25:33.175 "params": { 00:25:33.175 "impl_name": "posix" 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "sock_impl_set_options", 00:25:33.175 "params": { 00:25:33.175 "impl_name": "ssl", 00:25:33.175 "recv_buf_size": 4096, 00:25:33.175 "send_buf_size": 4096, 00:25:33.175 "enable_recv_pipe": true, 00:25:33.175 "enable_quickack": false, 00:25:33.175 "enable_placement_id": 0, 00:25:33.175 "enable_zerocopy_send_server": true, 00:25:33.175 "enable_zerocopy_send_client": false, 00:25:33.175 "zerocopy_threshold": 0, 00:25:33.175 "tls_version": 0, 00:25:33.175 "enable_ktls": false 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "sock_impl_set_options", 00:25:33.175 "params": { 00:25:33.175 "impl_name": "posix", 00:25:33.175 "recv_buf_size": 2097152, 00:25:33.175 "send_buf_size": 2097152, 00:25:33.175 "enable_recv_pipe": true, 00:25:33.175 "enable_quickack": false, 00:25:33.175 "enable_placement_id": 0, 00:25:33.175 "enable_zerocopy_send_server": true, 00:25:33.175 "enable_zerocopy_send_client": false, 00:25:33.175 "zerocopy_threshold": 0, 00:25:33.175 "tls_version": 0, 00:25:33.175 "enable_ktls": false 00:25:33.175 } 00:25:33.175 } 00:25:33.175 ] 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "subsystem": "vmd", 00:25:33.175 "config": [] 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "subsystem": "accel", 00:25:33.175 "config": [ 00:25:33.175 { 00:25:33.175 "method": "accel_set_options", 00:25:33.175 "params": { 00:25:33.175 "small_cache_size": 128, 00:25:33.175 "large_cache_size": 16, 00:25:33.175 "task_count": 2048, 00:25:33.175 "sequence_count": 2048, 00:25:33.175 "buf_count": 2048 00:25:33.175 } 00:25:33.175 } 00:25:33.175 ] 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "subsystem": "bdev", 00:25:33.175 "config": [ 00:25:33.175 { 00:25:33.175 "method": "bdev_set_options", 00:25:33.175 "params": { 00:25:33.175 "bdev_io_pool_size": 65535, 00:25:33.175 "bdev_io_cache_size": 256, 00:25:33.175 "bdev_auto_examine": true, 00:25:33.175 "iobuf_small_cache_size": 128, 00:25:33.175 "iobuf_large_cache_size": 16 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "bdev_raid_set_options", 00:25:33.175 "params": { 00:25:33.175 "process_window_size_kb": 1024, 00:25:33.175 "process_max_bandwidth_mb_sec": 0 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "bdev_iscsi_set_options", 00:25:33.175 "params": { 00:25:33.175 "timeout_sec": 30 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "bdev_nvme_set_options", 00:25:33.175 "params": { 00:25:33.175 "action_on_timeout": "none", 00:25:33.175 "timeout_us": 0, 00:25:33.175 "timeout_admin_us": 0, 00:25:33.175 "keep_alive_timeout_ms": 10000, 00:25:33.175 "arbitration_burst": 0, 00:25:33.175 "low_priority_weight": 0, 00:25:33.175 "medium_priority_weight": 0, 00:25:33.175 "high_priority_weight": 0, 00:25:33.175 "nvme_adminq_poll_period_us": 10000, 00:25:33.175 
"nvme_ioq_poll_period_us": 0, 00:25:33.175 "io_queue_requests": 0, 00:25:33.175 "delay_cmd_submit": true, 00:25:33.175 "transport_retry_count": 4, 00:25:33.175 "bdev_retry_count": 3, 00:25:33.175 "transport_ack_timeout": 0, 00:25:33.175 "ctrlr_loss_timeout_sec": 0, 00:25:33.175 "reconnect_delay_sec": 0, 00:25:33.175 "fast_io_fail_timeout_sec": 0, 00:25:33.175 "disable_auto_failback": false, 00:25:33.175 "generate_uuids": false, 00:25:33.175 "transport_tos": 0, 00:25:33.175 "nvme_error_stat": false, 00:25:33.175 "rdma_srq_size": 0, 00:25:33.175 "io_path_stat": false, 00:25:33.175 "allow_accel_sequence": false, 00:25:33.175 "rdma_max_cq_size": 0, 00:25:33.175 "rdma_cm_event_timeout_ms": 0, 00:25:33.175 "dhchap_digests": [ 00:25:33.175 "sha256", 00:25:33.175 "sha384", 00:25:33.175 "sha512" 00:25:33.175 ], 00:25:33.175 "dhchap_dhgroups": [ 00:25:33.175 "null", 00:25:33.175 "ffdhe2048", 00:25:33.175 "ffdhe3072", 00:25:33.175 "ffdhe4096", 00:25:33.175 "ffdhe6144", 00:25:33.175 "ffdhe8192" 00:25:33.175 ] 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "bdev_nvme_set_hotplug", 00:25:33.175 "params": { 00:25:33.175 "period_us": 100000, 00:25:33.175 "enable": false 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "bdev_malloc_create", 00:25:33.175 "params": { 00:25:33.175 "name": "malloc0", 00:25:33.175 "num_blocks": 8192, 00:25:33.175 "block_size": 4096, 00:25:33.175 "physical_block_size": 4096, 00:25:33.175 "uuid": "c70144b7-ddd1-4956-a0db-0b617727e2e7", 00:25:33.175 "optimal_io_boundary": 0, 00:25:33.175 "md_size": 0, 00:25:33.175 "dif_type": 0, 00:25:33.175 "dif_is_head_of_md": false, 00:25:33.175 "dif_pi_format": 0 00:25:33.175 } 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "method": "bdev_wait_for_examine" 00:25:33.175 } 00:25:33.175 ] 00:25:33.175 }, 00:25:33.175 { 00:25:33.175 "subsystem": "nbd", 00:25:33.175 "config": [] 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "subsystem": "scheduler", 00:25:33.176 "config": [ 00:25:33.176 { 00:25:33.176 "method": "framework_set_scheduler", 00:25:33.176 "params": { 00:25:33.176 "name": "static" 00:25:33.176 } 00:25:33.176 } 00:25:33.176 ] 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "subsystem": "nvmf", 00:25:33.176 "config": [ 00:25:33.176 { 00:25:33.176 "method": "nvmf_set_config", 00:25:33.176 "params": { 00:25:33.176 "discovery_filter": "match_any", 00:25:33.176 "admin_cmd_passthru": { 00:25:33.176 "identify_ctrlr": false 00:25:33.176 }, 00:25:33.176 "dhchap_digests": [ 00:25:33.176 "sha256", 00:25:33.176 "sha384", 00:25:33.176 "sha512" 00:25:33.176 ], 00:25:33.176 "dhchap_dhgroups": [ 00:25:33.176 "null", 00:25:33.176 "ffdhe2048", 00:25:33.176 "ffdhe3072", 00:25:33.176 "ffdhe4096", 00:25:33.176 "ffdhe6144", 00:25:33.176 "ffdhe8192" 00:25:33.176 ] 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_set_max_subsystems", 00:25:33.176 "params": { 00:25:33.176 "max_subsystems": 1024 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_set_crdt", 00:25:33.176 "params": { 00:25:33.176 "crdt1": 0, 00:25:33.176 "crdt2": 0, 00:25:33.176 "crdt3": 0 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_create_transport", 00:25:33.176 "params": { 00:25:33.176 "trtype": "TCP", 00:25:33.176 "max_queue_depth": 128, 00:25:33.176 "max_io_qpairs_per_ctrlr": 127, 00:25:33.176 "in_capsule_data_size": 4096, 00:25:33.176 "max_io_size": 131072, 00:25:33.176 "io_unit_size": 131072, 00:25:33.176 "max_aq_depth": 128, 00:25:33.176 "num_shared_buffers": 511, 00:25:33.176 
"buf_cache_size": 4294967295, 00:25:33.176 "dif_insert_or_strip": false, 00:25:33.176 "zcopy": false, 00:25:33.176 "c2h_success": false, 00:25:33.176 "sock_priority": 0, 00:25:33.176 "abort_timeout_sec": 1, 00:25:33.176 "ack_timeout": 0, 00:25:33.176 "data_wr_pool_size": 0 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_create_subsystem", 00:25:33.176 "params": { 00:25:33.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.176 "allow_any_host": false, 00:25:33.176 "serial_number": "00000000000000000000", 00:25:33.176 "model_number": "SPDK bdev Controller", 00:25:33.176 "max_namespaces": 32, 00:25:33.176 "min_cntlid": 1, 00:25:33.176 "max_cntlid": 65519, 00:25:33.176 "ana_reporting": false 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_subsystem_add_host", 00:25:33.176 "params": { 00:25:33.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.176 "host": "nqn.2016-06.io.spdk:host1", 00:25:33.176 "psk": "key0" 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_subsystem_add_ns", 00:25:33.176 "params": { 00:25:33.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.176 "namespace": { 00:25:33.176 "nsid": 1, 00:25:33.176 "bdev_name": "malloc0", 00:25:33.176 "nguid": "C70144B7DDD14956A0DB0B617727E2E7", 00:25:33.176 "uuid": "c70144b7-ddd1-4956-a0db-0b617727e2e7", 00:25:33.176 "no_auto_visible": false 00:25:33.176 } 00:25:33.176 } 00:25:33.176 }, 00:25:33.176 { 00:25:33.176 "method": "nvmf_subsystem_add_listener", 00:25:33.176 "params": { 00:25:33.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.176 "listen_address": { 00:25:33.176 "trtype": "TCP", 00:25:33.176 "adrfam": "IPv4", 00:25:33.176 "traddr": "10.0.0.2", 00:25:33.176 "trsvcid": "4420" 00:25:33.176 }, 00:25:33.176 "secure_channel": false, 00:25:33.176 "sock_impl": "ssl" 00:25:33.176 } 00:25:33.176 } 00:25:33.176 ] 00:25:33.176 } 00:25:33.176 ] 00:25:33.176 }' 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1768595 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1768595 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1768595 ']' 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:33.176 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.176 [2024-10-13 14:21:36.821505] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:25:33.176 [2024-10-13 14:21:36.821558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.438 [2024-10-13 14:21:36.958357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:33.438 [2024-10-13 14:21:37.005461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.438 [2024-10-13 14:21:37.021883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.438 [2024-10-13 14:21:37.021916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.438 [2024-10-13 14:21:37.021921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.438 [2024-10-13 14:21:37.021926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.438 [2024-10-13 14:21:37.021931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.438 [2024-10-13 14:21:37.022483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.699 [2024-10-13 14:21:37.210058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.700 [2024-10-13 14:21:37.242016] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:33.700 [2024-10-13 14:21:37.242232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1768908 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1768908 /var/tmp/bdevperf.sock 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1768908 ']' 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:33.960 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
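[editor's note] Unlike the earlier instances, this target is started with "-c /dev/fd/62": the configuration captured earlier via save_config is echoed back into nvmf_tgt over a file descriptor instead of being rebuilt RPC by RPC. A minimal sketch of the same replay pattern using an ordinary file, assuming a live target on the default RPC socket (the fd-62 plumbing in the log is just bash feeding the echoed JSON through process substitution):

    # Capture the running target's full configuration...
    scripts/rpc.py save_config > tgt.json
    # ...and start a fresh target pre-loaded with it, which is what
    # "-c /dev/fd/62" accomplishes here without a temporary file.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt.json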
00:25:33.961 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:33.961 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:33.961 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.961 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:33.961 "subsystems": [ 00:25:33.961 { 00:25:33.961 "subsystem": "keyring", 00:25:33.961 "config": [ 00:25:33.961 { 00:25:33.961 "method": "keyring_file_add_key", 00:25:33.961 "params": { 00:25:33.961 "name": "key0", 00:25:33.961 "path": "/tmp/tmp.TUXgXkn2vk" 00:25:33.961 } 00:25:33.961 } 00:25:33.961 ] 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "subsystem": "iobuf", 00:25:33.961 "config": [ 00:25:33.961 { 00:25:33.961 "method": "iobuf_set_options", 00:25:33.961 "params": { 00:25:33.961 "small_pool_count": 8192, 00:25:33.961 "large_pool_count": 1024, 00:25:33.961 "small_bufsize": 8192, 00:25:33.961 "large_bufsize": 135168 00:25:33.961 } 00:25:33.961 } 00:25:33.961 ] 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "subsystem": "sock", 00:25:33.961 "config": [ 00:25:33.961 { 00:25:33.961 "method": "sock_set_default_impl", 00:25:33.961 "params": { 00:25:33.961 "impl_name": "posix" 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "sock_impl_set_options", 00:25:33.961 "params": { 00:25:33.961 "impl_name": "ssl", 00:25:33.961 "recv_buf_size": 4096, 00:25:33.961 "send_buf_size": 4096, 00:25:33.961 "enable_recv_pipe": true, 00:25:33.961 "enable_quickack": false, 00:25:33.961 "enable_placement_id": 0, 00:25:33.961 "enable_zerocopy_send_server": true, 00:25:33.961 "enable_zerocopy_send_client": false, 00:25:33.961 "zerocopy_threshold": 0, 00:25:33.961 "tls_version": 0, 00:25:33.961 "enable_ktls": false 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "sock_impl_set_options", 00:25:33.961 "params": { 00:25:33.961 "impl_name": "posix", 00:25:33.961 "recv_buf_size": 2097152, 00:25:33.961 "send_buf_size": 2097152, 00:25:33.961 "enable_recv_pipe": true, 00:25:33.961 "enable_quickack": false, 00:25:33.961 "enable_placement_id": 0, 00:25:33.961 "enable_zerocopy_send_server": true, 00:25:33.961 "enable_zerocopy_send_client": false, 00:25:33.961 "zerocopy_threshold": 0, 00:25:33.961 "tls_version": 0, 00:25:33.961 "enable_ktls": false 00:25:33.961 } 00:25:33.961 } 00:25:33.961 ] 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "subsystem": "vmd", 00:25:33.961 "config": [] 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "subsystem": "accel", 00:25:33.961 "config": [ 00:25:33.961 { 00:25:33.961 "method": "accel_set_options", 00:25:33.961 "params": { 00:25:33.961 "small_cache_size": 128, 00:25:33.961 "large_cache_size": 16, 00:25:33.961 "task_count": 2048, 00:25:33.961 "sequence_count": 2048, 00:25:33.961 "buf_count": 2048 00:25:33.961 } 00:25:33.961 } 00:25:33.961 ] 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "subsystem": "bdev", 00:25:33.961 "config": [ 00:25:33.961 { 00:25:33.961 "method": "bdev_set_options", 00:25:33.961 "params": { 00:25:33.961 "bdev_io_pool_size": 65535, 00:25:33.961 "bdev_io_cache_size": 256, 00:25:33.961 "bdev_auto_examine": true, 00:25:33.961 "iobuf_small_cache_size": 128, 00:25:33.961 "iobuf_large_cache_size": 16 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_raid_set_options", 00:25:33.961 
"params": { 00:25:33.961 "process_window_size_kb": 1024, 00:25:33.961 "process_max_bandwidth_mb_sec": 0 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_iscsi_set_options", 00:25:33.961 "params": { 00:25:33.961 "timeout_sec": 30 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_nvme_set_options", 00:25:33.961 "params": { 00:25:33.961 "action_on_timeout": "none", 00:25:33.961 "timeout_us": 0, 00:25:33.961 "timeout_admin_us": 0, 00:25:33.961 "keep_alive_timeout_ms": 10000, 00:25:33.961 "arbitration_burst": 0, 00:25:33.961 "low_priority_weight": 0, 00:25:33.961 "medium_priority_weight": 0, 00:25:33.961 "high_priority_weight": 0, 00:25:33.961 "nvme_adminq_poll_period_us": 10000, 00:25:33.961 "nvme_ioq_poll_period_us": 0, 00:25:33.961 "io_queue_requests": 512, 00:25:33.961 "delay_cmd_submit": true, 00:25:33.961 "transport_retry_count": 4, 00:25:33.961 "bdev_retry_count": 3, 00:25:33.961 "transport_ack_timeout": 0, 00:25:33.961 "ctrlr_loss_timeout_sec": 0, 00:25:33.961 "reconnect_delay_sec": 0, 00:25:33.961 "fast_io_fail_timeout_sec": 0, 00:25:33.961 "disable_auto_failback": false, 00:25:33.961 "generate_uuids": false, 00:25:33.961 "transport_tos": 0, 00:25:33.961 "nvme_error_stat": false, 00:25:33.961 "rdma_srq_size": 0, 00:25:33.961 "io_path_stat": false, 00:25:33.961 "allow_accel_sequence": false, 00:25:33.961 "rdma_max_cq_size": 0, 00:25:33.961 "rdma_cm_event_timeout_ms": 0, 00:25:33.961 "dhchap_digests": [ 00:25:33.961 "sha256", 00:25:33.961 "sha384", 00:25:33.961 "sha512" 00:25:33.961 ], 00:25:33.961 "dhchap_dhgroups": [ 00:25:33.961 "null", 00:25:33.961 "ffdhe2048", 00:25:33.961 "ffdhe3072", 00:25:33.961 "ffdhe4096", 00:25:33.961 "ffdhe6144", 00:25:33.961 "ffdhe8192" 00:25:33.961 ] 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_nvme_attach_controller", 00:25:33.961 "params": { 00:25:33.961 "name": "nvme0", 00:25:33.961 "trtype": "TCP", 00:25:33.961 "adrfam": "IPv4", 00:25:33.961 "traddr": "10.0.0.2", 00:25:33.961 "trsvcid": "4420", 00:25:33.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.961 "prchk_reftag": false, 00:25:33.961 "prchk_guard": false, 00:25:33.961 "ctrlr_loss_timeout_sec": 0, 00:25:33.961 "reconnect_delay_sec": 0, 00:25:33.961 "fast_io_fail_timeout_sec": 0, 00:25:33.961 "psk": "key0", 00:25:33.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.961 "hdgst": false, 00:25:33.961 "ddgst": false, 00:25:33.961 "multipath": "multipath" 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_nvme_set_hotplug", 00:25:33.961 "params": { 00:25:33.961 "period_us": 100000, 00:25:33.961 "enable": false 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_enable_histogram", 00:25:33.961 "params": { 00:25:33.961 "name": "nvme0n1", 00:25:33.961 "enable": true 00:25:33.961 } 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "method": "bdev_wait_for_examine" 00:25:33.961 } 00:25:33.961 ] 00:25:33.961 }, 00:25:33.961 { 00:25:33.961 "subsystem": "nbd", 00:25:33.961 "config": [] 00:25:33.961 } 00:25:33.961 ] 00:25:33.961 }' 00:25:34.222 [2024-10-13 14:21:37.710968] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:25:34.222 [2024-10-13 14:21:37.711017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768908 ] 00:25:34.222 [2024-10-13 14:21:37.841528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:34.222 [2024-10-13 14:21:37.888889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.222 [2024-10-13 14:21:37.905263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.481 [2024-10-13 14:21:38.034451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.050 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.050 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:35.050 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:35.050 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:35.050 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.050 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:35.050 Running I/O for 1 seconds... 00:25:36.434 4702.00 IOPS, 18.37 MiB/s 00:25:36.434 Latency(us) 00:25:36.434 [2024-10-13T12:21:40.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.434 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:36.434 Verification LBA range: start 0x0 length 0x2000 00:25:36.434 nvme0n1 : 1.02 4739.09 18.51 0.00 0.00 26796.95 5118.29 45325.60 00:25:36.434 [2024-10-13T12:21:40.141Z] =================================================================================================================== 00:25:36.434 [2024-10-13T12:21:40.141Z] Total : 4739.09 18.51 0.00 0.00 26796.95 5118.29 45325.60 00:25:36.434 { 00:25:36.434 "results": [ 00:25:36.434 { 00:25:36.434 "job": "nvme0n1", 00:25:36.434 "core_mask": "0x2", 00:25:36.434 "workload": "verify", 00:25:36.434 "status": "finished", 00:25:36.434 "verify_range": { 00:25:36.434 "start": 0, 00:25:36.434 "length": 8192 00:25:36.434 }, 00:25:36.434 "queue_depth": 128, 00:25:36.434 "io_size": 4096, 00:25:36.434 "runtime": 1.019182, 00:25:36.434 "iops": 4739.0946857381705, 00:25:36.434 "mibps": 18.51208861616473, 00:25:36.434 "io_failed": 0, 00:25:36.434 "io_timeout": 0, 00:25:36.434 "avg_latency_us": 26796.948606790585, 00:25:36.434 "min_latency_us": 5118.289341797527, 00:25:36.434 "max_latency_us": 45325.599732709656 00:25:36.434 } 00:25:36.434 ], 00:25:36.434 "core_count": 1 00:25:36.434 } 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:25:36.434 
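The JSON block above is bdevperf's machine-readable result record. A sketch, assuming it was captured to a hypothetical results.json: pull the headline numbers out with jq, using the field names shown.

# Print job name, IOPS, throughput, and mean latency from the result record.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json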
14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:36.434 nvmf_trace.0 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1768908 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1768908 ']' 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1768908 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768908 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768908' 00:25:36.434 killing process with pid 1768908 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1768908 00:25:36.434 Received shutdown signal, test time was about 1.000000 seconds 00:25:36.434 00:25:36.434 Latency(us) 00:25:36.434 [2024-10-13T12:21:40.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.434 [2024-10-13T12:21:40.141Z] =================================================================================================================== 00:25:36.434 [2024-10-13T12:21:40.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.434 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1768908 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.434 rmmod nvme_tcp 
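The tar step above preserves the nvmf trace shared-memory file for offline analysis before the processes are killed. The same capture in standalone form (output directory as this log constructs it):

# Archive every trace shm file ending in ".0" before teardown removes it.
out_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
    tar -C /dev/shm/ -czf "$out_dir/${f}_shm.tar.gz" "$f"
done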
00:25:36.434 rmmod nvme_fabrics 00:25:36.434 rmmod nvme_keyring 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1768595 ']' 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1768595 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1768595 ']' 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1768595 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:36.434 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768595 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768595' 00:25:36.695 killing process with pid 1768595 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1768595 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1768595 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.695 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yWKDShppKo /tmp/tmp.45VpdRmhe4 /tmp/tmp.TUXgXkn2vk 00:25:39.241 00:25:39.241 real 1m28.845s 00:25:39.241 user 2m16.666s 00:25:39.241 sys 0m28.109s 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 ************************************ 00:25:39.241 END TEST nvmf_tls 00:25:39.241 ************************************ 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 ************************************ 00:25:39.241 START TEST nvmf_fips 00:25:39.241 ************************************ 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:39.241 * Looking for test storage... 00:25:39.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.241 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.242 --rc genhtml_branch_coverage=1 00:25:39.242 --rc genhtml_function_coverage=1 00:25:39.242 --rc genhtml_legend=1 00:25:39.242 --rc geninfo_all_blocks=1 00:25:39.242 --rc geninfo_unexecuted_blocks=1 00:25:39.242 00:25:39.242 ' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.242 --rc genhtml_branch_coverage=1 00:25:39.242 --rc genhtml_function_coverage=1 00:25:39.242 --rc genhtml_legend=1 00:25:39.242 --rc geninfo_all_blocks=1 00:25:39.242 --rc geninfo_unexecuted_blocks=1 00:25:39.242 00:25:39.242 ' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.242 --rc genhtml_branch_coverage=1 00:25:39.242 --rc genhtml_function_coverage=1 00:25:39.242 --rc genhtml_legend=1 00:25:39.242 --rc geninfo_all_blocks=1 00:25:39.242 --rc geninfo_unexecuted_blocks=1 00:25:39.242 00:25:39.242 ' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:39.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.242 --rc genhtml_branch_coverage=1 00:25:39.242 --rc genhtml_function_coverage=1 00:25:39.242 --rc genhtml_legend=1 00:25:39.242 --rc geninfo_all_blocks=1 00:25:39.242 --rc geninfo_unexecuted_blocks=1 00:25:39.242 00:25:39.242 ' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:39.242 14:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:39.242 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:39.243 Error setting digest 00:25:39.243 407289A8BF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:39.243 407289A8BF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:39.243 
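The "Error setting digest" output above is the FIPS gate working as intended: with OPENSSL_CONF pointing at the generated spdk_fips.conf, a non-approved algorithm such as MD5 must be rejected. A sketch of that self-check:

# Under an enforced FIPS provider, fetching MD5 fails; treat success as an error.
if echo -n test | openssl md5 > /dev/null 2>&1; then
    echo "MD5 succeeded; FIPS mode is not enforced" >&2
    exit 1
fi
echo "MD5 rejected; FIPS provider is active"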
14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.243 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.384 14:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:47.384 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:47.384 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.384 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:47.385 14:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:47.385 Found net devices under 0000:31:00.0: cvl_0_0 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:47.385 Found net devices under 0000:31:00.1: cvl_0_1 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:47.385 14:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.385 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:47.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:25:47.385 00:25:47.385 --- 10.0.0.2 ping statistics --- 00:25:47.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.385 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
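The commands traced above build the physical-NIC test topology: the target-side port is moved into its own network namespace so initiator traffic to 10.0.0.2:4420 actually crosses the link. The same setup in standalone form (device names and addresses as this log reports them):

# Move the target port into a namespace and address both ends of the link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic arriving on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # confirm the target address answers across the link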
00:25:47.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:47.385 00:25:47.385 --- 10.0.0.1 ping statistics --- 00:25:47.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.385 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1773706 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1773706 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1773706 ']' 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.385 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:47.385 [2024-10-13 14:21:50.344557] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:47.385 [2024-10-13 14:21:50.344623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.385 [2024-10-13 14:21:50.486713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:25:47.385 [2024-10-13 14:21:50.536586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.385 [2024-10-13 14:21:50.558494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.385 [2024-10-13 14:21:50.558529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.385 [2024-10-13 14:21:50.558536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.385 [2024-10-13 14:21:50.558542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.385 [2024-10-13 14:21:50.558548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.385 [2024-10-13 14:21:50.559226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.3zw 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.3zw 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.3zw 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.3zw 00:25:47.647 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:47.647 [2024-10-13 14:21:51.344364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.908 [2024-10-13 14:21:51.360333] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:47.908 [2024-10-13 14:21:51.360506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.908 malloc0 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1773794 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1773794 /var/tmp/bdevperf.sock 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1773794 ']' 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:47.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.908 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:47.908 [2024-10-13 14:21:51.503808] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:25:47.908 [2024-10-13 14:21:51.503867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773794 ] 00:25:48.168 [2024-10-13 14:21:51.634194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:48.168 [2024-10-13 14:21:51.684266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.168 [2024-10-13 14:21:51.701802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.739 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:48.739 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:48.739 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.3zw 00:25:48.999 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:48.999 [2024-10-13 14:21:52.592578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:48.999 TLSTESTn1 00:25:48.999 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:49.259 Running I/O for 10 seconds... 
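(Aside: the trace above condenses the essential TLS setup for the FIPS test: a PSK interchange key is written to an owner-only temp file, registered with bdevperf's RPC server via keyring_file_add_key, and then used to attach over NVMe/TCP with --psk; the per-second throughput samples and final latency summary of the 10-second verify run follow below. A minimal sketch of that sequence, assuming a built SPDK tree in $SPDK, a target already listening as traced earlier, and the addresses from this run; the key value is the test key from the log, not a production secret:

  # write the TLS PSK interchange key with owner-only permissions
  KEY_PATH=$(mktemp -t spdk-psk.XXX)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  # start bdevperf on its own RPC socket, then register the key and attach with TLS
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  # kick off the timed verify workload against the attached TLSTESTn1 bdev
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

End of aside; the run output continues.)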
00:25:51.142 5569.00 IOPS, 21.75 MiB/s [2024-10-13T12:21:55.789Z] 5400.50 IOPS, 21.10 MiB/s [2024-10-13T12:21:57.173Z] 5393.00 IOPS, 21.07 MiB/s [2024-10-13T12:21:58.113Z] 5510.75 IOPS, 21.53 MiB/s [2024-10-13T12:21:59.055Z] 5675.20 IOPS, 22.17 MiB/s [2024-10-13T12:21:59.996Z] 5724.00 IOPS, 22.36 MiB/s [2024-10-13T12:22:00.938Z] 5566.86 IOPS, 21.75 MiB/s [2024-10-13T12:22:01.878Z] 5493.00 IOPS, 21.46 MiB/s [2024-10-13T12:22:02.818Z] 5497.67 IOPS, 21.48 MiB/s [2024-10-13T12:22:03.201Z] 5481.00 IOPS, 21.41 MiB/s 00:25:59.494 Latency(us) 00:25:59.494 [2024-10-13T12:22:03.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.494 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:59.494 Verification LBA range: start 0x0 length 0x2000 00:25:59.494 TLSTESTn1 : 10.05 5465.61 21.35 0.00 0.00 23345.89 6295.22 63499.63 00:25:59.494 [2024-10-13T12:22:03.201Z] =================================================================================================================== 00:25:59.494 [2024-10-13T12:22:03.201Z] Total : 5465.61 21.35 0.00 0.00 23345.89 6295.22 63499.63 00:25:59.494 { 00:25:59.494 "results": [ 00:25:59.494 { 00:25:59.494 "job": "TLSTESTn1", 00:25:59.494 "core_mask": "0x4", 00:25:59.494 "workload": "verify", 00:25:59.494 "status": "finished", 00:25:59.494 "verify_range": { 00:25:59.494 "start": 0, 00:25:59.494 "length": 8192 00:25:59.494 }, 00:25:59.494 "queue_depth": 128, 00:25:59.494 "io_size": 4096, 00:25:59.494 "runtime": 10.051403, 00:25:59.494 "iops": 5465.605149848235, 00:25:59.494 "mibps": 21.350020116594667, 00:25:59.494 "io_failed": 0, 00:25:59.494 "io_timeout": 0, 00:25:59.494 "avg_latency_us": 23345.893334515462, 00:25:59.494 "min_latency_us": 6295.222185098563, 00:25:59.494 "max_latency_us": 63499.632475776816 00:25:59.494 } 00:25:59.494 ], 00:25:59.494 "core_count": 1 00:25:59.494 } 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:59.494 nvmf_trace.0 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1773794 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1773794 ']' 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1773794 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.494 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773794 00:25:59.494 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:59.494 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:59.494 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773794' 00:25:59.495 killing process with pid 1773794 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1773794 00:25:59.495 Received shutdown signal, test time was about 10.000000 seconds 00:25:59.495 00:25:59.495 Latency(us) 00:25:59.495 [2024-10-13T12:22:03.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.495 [2024-10-13T12:22:03.202Z] =================================================================================================================== 00:25:59.495 [2024-10-13T12:22:03.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1773794 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.495 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.495 rmmod nvme_tcp 00:25:59.495 rmmod nvme_fabrics 00:25:59.495 rmmod nvme_keyring 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1773706 ']' 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1773706 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1773706 ']' 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1773706 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773706 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:59.793 14:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773706' 00:25:59.793 killing process with pid 1773706 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1773706 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1773706 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.793 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.339 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.339 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.3zw 00:26:02.339 00:26:02.339 real 0m22.992s 00:26:02.339 user 0m24.555s 00:26:02.340 sys 0m9.420s 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:02.340 ************************************ 00:26:02.340 END TEST nvmf_fips 00:26:02.340 ************************************ 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:02.340 ************************************ 00:26:02.340 START TEST nvmf_control_msg_list 00:26:02.340 ************************************ 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:02.340 * Looking for test storage... 
00:26:02.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.340 --rc genhtml_branch_coverage=1 00:26:02.340 --rc genhtml_function_coverage=1 00:26:02.340 --rc genhtml_legend=1 00:26:02.340 --rc geninfo_all_blocks=1 00:26:02.340 --rc geninfo_unexecuted_blocks=1 00:26:02.340 00:26:02.340 ' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.340 --rc genhtml_branch_coverage=1 00:26:02.340 --rc genhtml_function_coverage=1 00:26:02.340 --rc genhtml_legend=1 00:26:02.340 --rc geninfo_all_blocks=1 00:26:02.340 --rc geninfo_unexecuted_blocks=1 00:26:02.340 00:26:02.340 ' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.340 --rc genhtml_branch_coverage=1 00:26:02.340 --rc genhtml_function_coverage=1 00:26:02.340 --rc genhtml_legend=1 00:26:02.340 --rc geninfo_all_blocks=1 00:26:02.340 --rc geninfo_unexecuted_blocks=1 00:26:02.340 00:26:02.340 ' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.340 --rc genhtml_branch_coverage=1 00:26:02.340 --rc genhtml_function_coverage=1 00:26:02.340 --rc genhtml_legend=1 00:26:02.340 --rc geninfo_all_blocks=1 00:26:02.340 --rc geninfo_unexecuted_blocks=1 00:26:02.340 00:26:02.340 ' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.340 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.341 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:26:10.482 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:10.482 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.482 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.482 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:10.483 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:10.483 Found net devices under 0000:31:00.0: cvl_0_0 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:10.483 Found net devices under 0000:31:00.1: cvl_0_1 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.483 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:26:10.483 00:26:10.483 --- 10.0.0.2 ping statistics --- 00:26:10.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.483 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:10.483 00:26:10.483 --- 10.0.0.1 ping statistics --- 00:26:10.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.483 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1780480 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1780480 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1780480 ']' 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.483 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.483 [2024-10-13 14:22:13.516079] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:26:10.483 [2024-10-13 14:22:13.516132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.483 [2024-10-13 14:22:13.652846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:10.483 [2024-10-13 14:22:13.688633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.483 [2024-10-13 14:22:13.706033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.483 [2024-10-13 14:22:13.706082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.483 [2024-10-13 14:22:13.706091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.483 [2024-10-13 14:22:13.706097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.483 [2024-10-13 14:22:13.706103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:10.483 [2024-10-13 14:22:13.706729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.745 [2024-10-13 14:22:14.371902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.745 Malloc0 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.745 14:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:10.745 [2024-10-13 14:22:14.424205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1780527 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1780529 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1780531 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1780527 00:26:10.745 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:11.007 [2024-10-13 14:22:14.604604] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:11.007 [2024-10-13 14:22:14.614658] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:11.007 [2024-10-13 14:22:14.614974] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:12.392 Initializing NVMe Controllers 00:26:12.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:12.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:26:12.392 Initialization complete. Launching workers. 
00:26:12.392 ======================================================== 00:26:12.392 Latency(us) 00:26:12.392 Device Information : IOPS MiB/s Average min max 00:26:12.392 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2177.00 8.50 459.18 131.27 737.66 00:26:12.392 ======================================================== 00:26:12.392 Total : 2177.00 8.50 459.18 131.27 737.66 00:26:12.392 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1780529 00:26:12.392 Initializing NVMe Controllers 00:26:12.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:12.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:12.392 Initialization complete. Launching workers. 00:26:12.392 ======================================================== 00:26:12.392 Latency(us) 00:26:12.392 Device Information : IOPS MiB/s Average min max 00:26:12.392 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2122.00 8.29 470.96 129.16 779.99 00:26:12.392 ======================================================== 00:26:12.392 Total : 2122.00 8.29 470.96 129.16 779.99 00:26:12.392 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1780531 00:26:12.392 Initializing NVMe Controllers 00:26:12.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:12.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:26:12.392 Initialization complete. Launching workers. 00:26:12.392 ======================================================== 00:26:12.392 Latency(us) 00:26:12.392 Device Information : IOPS MiB/s Average min max 00:26:12.392 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41020.44 40911.48 41512.66 00:26:12.392 ======================================================== 00:26:12.392 Total : 25.00 0.10 41020.44 40911.48 41512.66 00:26:12.392 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.392 rmmod nvme_tcp 00:26:12.392 rmmod nvme_fabrics 00:26:12.392 rmmod nvme_keyring 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' 
-n 1780480 ']' 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1780480 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1780480 ']' 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1780480 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:12.392 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1780480 00:26:12.392 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:12.392 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:12.392 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1780480' 00:26:12.392 killing process with pid 1780480 00:26:12.392 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1780480 00:26:12.392 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1780480 00:26:12.653 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:12.653 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:12.653 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:12.653 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:26:12.653 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.654 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.565 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.565 00:26:14.565 real 0m12.713s 00:26:14.565 user 0m8.026s 00:26:14.565 sys 0m6.667s 00:26:14.565 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:14.565 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:14.565 ************************************ 00:26:14.565 END TEST nvmf_control_msg_list 00:26:14.565 ************************************ 
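(Aside: the control-message-list test that just finished exercises a deliberately starved transport: the target's TCP transport is created with a single control message (--control-msg-num 1) and a small in-capsule data size, and three single-queue spdk_nvme_perf readers on separate cores then contend for it; the tiny third result table above (25 IOPS, ~41 ms average) shows the instance that had to queue behind the others. A condensed sketch of the RPC and perf invocations from the trace, assuming a running nvmf_tgt reachable via the default RPC socket and the 10.0.0.2:4420 listener used in this run:

  # create the TCP transport with one control message to force queuing
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc0 32 512
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # three single-queue readers on different cores race for the lone control message
  for mask in 0x2 0x4 0x8; do
      "$SPDK/build/bin/spdk_nvme_perf" -c $mask -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

End of aside; the log continues with the next test.)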
00:26:14.825 14:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:14.825 14:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:14.825 14:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:14.825 14:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:14.825 ************************************ 00:26:14.825 START TEST nvmf_wait_for_buf 00:26:14.825 ************************************ 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:14.826 * Looking for test storage... 00:26:14.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.826 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:15.087 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:15.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.088 --rc genhtml_branch_coverage=1 00:26:15.088 --rc genhtml_function_coverage=1 00:26:15.088 --rc genhtml_legend=1 00:26:15.088 --rc geninfo_all_blocks=1 00:26:15.088 --rc geninfo_unexecuted_blocks=1 00:26:15.088 00:26:15.088 ' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:15.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.088 --rc genhtml_branch_coverage=1 00:26:15.088 --rc genhtml_function_coverage=1 00:26:15.088 --rc genhtml_legend=1 00:26:15.088 --rc geninfo_all_blocks=1 00:26:15.088 --rc geninfo_unexecuted_blocks=1 00:26:15.088 00:26:15.088 ' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:15.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.088 --rc genhtml_branch_coverage=1 00:26:15.088 --rc genhtml_function_coverage=1 00:26:15.088 --rc genhtml_legend=1 00:26:15.088 --rc geninfo_all_blocks=1 00:26:15.088 --rc geninfo_unexecuted_blocks=1 00:26:15.088 00:26:15.088 ' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:15.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.088 --rc genhtml_branch_coverage=1 00:26:15.088 --rc genhtml_function_coverage=1 00:26:15.088 --rc genhtml_legend=1 00:26:15.088 --rc geninfo_all_blocks=1 00:26:15.088 --rc geninfo_unexecuted_blocks=1 00:26:15.088 00:26:15.088 ' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.088 14:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.088 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.228 
14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:23.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:23.228 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:23.228 Found net devices under 0000:31:00.0: cvl_0_0 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:23.228 Found net devices under 0000:31:00.1: cvl_0_1 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.228 14:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:23.228 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:23.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:26:23.229 00:26:23.229 --- 10.0.0.2 ping statistics --- 00:26:23.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.229 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:23.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:26:23.229 00:26:23.229 --- 10.0.0.1 ping statistics --- 00:26:23.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.229 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1785249 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1785249 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1785249 ']' 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:23.229 14:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.229 [2024-10-13 14:22:26.048105] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
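While the target comes up, it is worth summarizing the plumbing traced above: the two ports of the same physical NIC are split across a network-namespace boundary so one host can act as both target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (cvl_0_1, 10.0.0.1, in the root namespace), and the two pings verified the path before nvmf_tgt was launched inside the namespace. A condensed sketch of that setup, commands lifted from the trace (the -m comment tag on the iptables rule is omitted here):

    # nvmf_tcp_init, condensed from the trace above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator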
00:26:23.229 [2024-10-13 14:22:26.048159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.229 [2024-10-13 14:22:26.186162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:23.229 [2024-10-13 14:22:26.236825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.229 [2024-10-13 14:22:26.262818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.229 [2024-10-13 14:22:26.262863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.229 [2024-10-13 14:22:26.262871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.229 [2024-10-13 14:22:26.262878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.229 [2024-10-13 14:22:26.262884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.229 [2024-10-13 14:22:26.263644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.229 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.490 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.490 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:23.490 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.490 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.490 Malloc0 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.490 [2024-10-13 14:22:27.012396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:23.490 [2024-10-13 14:22:27.048609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.490 14:22:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.750 
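That perf invocation is the load generator for the whole test. Everything since the target came up exists to build a deliberately starved buffer configuration: nvmf_tgt was started with --wait-for-rpc precisely so the iobuf pools could be shrunk (small pool capped at 154 entries) before framework_start_init, and the TCP transport gets only 24 shared data buffers, so 4 outstanding 128 KiB random reads are enough to force the wait-for-buffer path. Condensed, with rpc.py standing in for the script's rpc_cmd wrapper (which talks to /var/tmp/spdk.sock):

    # wait_for_buf target setup, condensed from the trace above
    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'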
[2024-10-13 14:22:27.238611] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:25.134 Initializing NVMe Controllers 00:26:25.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:25.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:25.134 Initialization complete. Launching workers. 00:26:25.134 ======================================================== 00:26:25.134 Latency(us) 00:26:25.134 Device Information : IOPS MiB/s Average min max 00:26:25.134 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 166390.17 47982.11 192005.19 00:26:25.134 ======================================================== 00:26:25.134 Total : 25.00 3.12 166390.17 47982.11 192005.19 00:26:25.134 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.134 rmmod nvme_tcp 00:26:25.134 rmmod nvme_fabrics 00:26:25.134 rmmod nvme_keyring 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1785249 ']' 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1785249 00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1785249 ']' 
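The retry_count=374 pulled out of iobuf_get_stats just above is the actual pass criterion: a non-zero small_pool.retry for the nvmf_TCP module proves the target really had to queue I/O waiting for buffers, which is also why this run's average latency is ~166 ms instead of the sub-millisecond figures of the earlier tests. The gate reduces to roughly the following (jq filter verbatim from the trace, rpc.py again a stand-in for rpc_cmd, the failure action approximated):

    # wait_for_buf pass gate, approximated from the trace above
    retry_count=$(rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1   # zero retries: the wait path never ran, fail the test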
00:26:25.134 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1785249 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1785249 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1785249' 00:26:25.395 killing process with pid 1785249 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1785249 00:26:25.395 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1785249 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.395 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.940 00:26:27.940 real 0m12.795s 00:26:27.940 user 0m5.163s 00:26:27.940 sys 0m6.090s 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.940 ************************************ 00:26:27.940 END TEST nvmf_wait_for_buf 00:26:27.940 ************************************ 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:27.940 ************************************ 00:26:27.940 START TEST nvmf_fuzz 00:26:27.940 ************************************ 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:27.940 * Looking for test storage... 00:26:27.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.940 --rc genhtml_branch_coverage=1 00:26:27.940 --rc genhtml_function_coverage=1 00:26:27.940 --rc genhtml_legend=1 00:26:27.940 --rc geninfo_all_blocks=1 00:26:27.940 --rc geninfo_unexecuted_blocks=1 00:26:27.940 00:26:27.940 ' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.940 --rc genhtml_branch_coverage=1 00:26:27.940 --rc genhtml_function_coverage=1 00:26:27.940 --rc genhtml_legend=1 00:26:27.940 --rc geninfo_all_blocks=1 00:26:27.940 --rc geninfo_unexecuted_blocks=1 00:26:27.940 00:26:27.940 ' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.940 --rc genhtml_branch_coverage=1 00:26:27.940 --rc genhtml_function_coverage=1 00:26:27.940 --rc genhtml_legend=1 00:26:27.940 --rc geninfo_all_blocks=1 00:26:27.940 --rc geninfo_unexecuted_blocks=1 00:26:27.940 00:26:27.940 ' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.940 --rc genhtml_branch_coverage=1 00:26:27.940 --rc genhtml_function_coverage=1 00:26:27.940 --rc genhtml_legend=1 00:26:27.940 --rc geninfo_all_blocks=1 00:26:27.940 --rc geninfo_unexecuted_blocks=1 00:26:27.940 00:26:27.940 ' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.940 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.941 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:36.084 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:36.084 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:36.084 Found net devices under 0000:31:00.0: cvl_0_0 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:36.084 Found net devices under 0000:31:00.1: cvl_0_1 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.084 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.085 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:26:36.085 00:26:36.085 --- 10.0.0.2 ping statistics --- 00:26:36.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.085 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:26:36.085 00:26:36.085 --- 10.0.0.1 ping statistics --- 00:26:36.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.085 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1790002 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1790002 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1790002 ']' 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
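At this point nvmftestinit has finished wiring the test network: one port of the dual-port NIC (cvl_0_0, 10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace for the target side, the other port (cvl_0_1, 10.0.0.1) stays in the default namespace for the initiator, an iptables rule admits NVMe/TCP traffic on port 4420, and both directions have been verified with ping. A minimal standalone sketch of the same split, assuming generic port names eth0/eth1 in place of this rig's cvl_* names:

    # target side: move one port into its own namespace
    ip netns add spdk_tgt_ns
    ip link set eth0 netns spdk_tgt_ns
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth0
    ip netns exec spdk_tgt_ns ip link set eth0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    # initiator side: stays in the default namespace
    ip addr add 10.0.0.1/24 dev eth1
    ip link set eth1 up
    # admit NVMe/TCP traffic on the default port before connecting
    iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions, as the harness does
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1

The target process (nvmf_tgt, single core via -m 0x1) has just been launched inside that namespace; the trace resumes once /var/tmp/spdk.sock answers and then configures the subsystem to be fuzzed.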
00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.085 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.347 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.608 Malloc0 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:36.608 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:08.727 Fuzzing completed. 
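The 30-second randomized pass has just reported completion; its opcode and command-count summary follows, after which the harness runs a second, deterministic pass that replays a JSON corpus. Condensed, the two invocations from the trace (paths shortened to $SPDK_DIR; flag meanings inferred from context: -m is the usual SPDK core mask, -t the run time in seconds, -S the random seed echoed in the summary, -F the target's transport ID, -j a JSON command corpus to replay):

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    fuzz=$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz
    # timed, seeded random pass against the TCP listener
    $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    # deterministic replay of the bundled example corpus
    $fuzz -m 0x2 -F "$trid" -j $SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json -a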
Shutting down the fuzz application 00:27:08.727 00:27:08.727 Dumping successful admin opcodes: 00:27:08.727 8, 9, 10, 24, 00:27:08.727 Dumping successful io opcodes: 00:27:08.727 0, 9, 00:27:08.727 NS: 0x2000008eff00 I/O qp, Total commands completed: 1161739, total successful commands: 6837, random_seed: 3321358080 00:27:08.727 NS: 0x2000008eff00 admin qp, Total commands completed: 148987, total successful commands: 1200, random_seed: 3992969088 00:27:08.727 14:23:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:08.727 Fuzzing completed. Shutting down the fuzz application 00:27:08.727 00:27:08.727 Dumping successful admin opcodes: 00:27:08.727 24, 00:27:08.727 Dumping successful io opcodes: 00:27:08.727 00:27:08.727 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3002975459 00:27:08.727 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3003046707 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.727 rmmod nvme_tcp 00:27:08.727 rmmod nvme_fabrics 00:27:08.727 rmmod nvme_keyring 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 1790002 ']' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 1790002 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1790002 ']' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1790002 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:27:08.727 14:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1790002 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1790002' 00:27:08.727 killing process with pid 1790002 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1790002 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1790002 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.727 14:23:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:10.653 00:27:10.653 real 0m42.916s 00:27:10.653 user 0m56.238s 00:27:10.653 sys 0m15.588s 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:10.653 ************************************ 00:27:10.653 END TEST nvmf_fuzz 00:27:10.653 ************************************ 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:10.653 
************************************ 00:27:10.653 START TEST nvmf_multiconnection 00:27:10.653 ************************************ 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:10.653 * Looking for test storage... 00:27:10.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:27:10.653 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.915 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:10.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.916 --rc genhtml_branch_coverage=1 00:27:10.916 --rc genhtml_function_coverage=1 00:27:10.916 --rc genhtml_legend=1 00:27:10.916 --rc geninfo_all_blocks=1 00:27:10.916 --rc geninfo_unexecuted_blocks=1 00:27:10.916 00:27:10.916 ' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:10.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.916 --rc genhtml_branch_coverage=1 00:27:10.916 --rc genhtml_function_coverage=1 00:27:10.916 --rc genhtml_legend=1 00:27:10.916 --rc geninfo_all_blocks=1 00:27:10.916 --rc geninfo_unexecuted_blocks=1 00:27:10.916 00:27:10.916 ' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:10.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.916 --rc genhtml_branch_coverage=1 00:27:10.916 --rc genhtml_function_coverage=1 00:27:10.916 --rc genhtml_legend=1 00:27:10.916 --rc geninfo_all_blocks=1 00:27:10.916 --rc geninfo_unexecuted_blocks=1 00:27:10.916 00:27:10.916 ' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:10.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.916 --rc genhtml_branch_coverage=1 00:27:10.916 --rc genhtml_function_coverage=1 00:27:10.916 --rc genhtml_legend=1 00:27:10.916 --rc geninfo_all_blocks=1 00:27:10.916 --rc geninfo_unexecuted_blocks=1 00:27:10.916 00:27:10.916 ' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.916 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:19.089 14:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:19.089 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:19.089 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:19.089 Found net devices under 0000:31:00.0: cvl_0_0 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:19.089 Found net devices under 0000:31:00.1: cvl_0_1 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:19.089 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:27:19.089 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.089 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.089 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:19.089 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:19.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:27:19.089 00:27:19.089 --- 10.0.0.2 ping statistics --- 00:27:19.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.089 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:27:19.089 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:27:19.090 00:27:19.090 --- 10.0.0.1 ping statistics --- 00:27:19.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.090 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=1800435 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 1800435 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:19.090 14:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1800435 ']' 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.090 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.090 [2024-10-13 14:23:22.210394] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:27:19.090 [2024-10-13 14:23:22.210469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.090 [2024-10-13 14:23:22.353864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:19.090 [2024-10-13 14:23:22.402143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:19.090 [2024-10-13 14:23:22.432019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.090 [2024-10-13 14:23:22.432077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.090 [2024-10-13 14:23:22.432085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.090 [2024-10-13 14:23:22.432092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.090 [2024-10-13 14:23:22.432098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
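Unlike the fuzz target, which ran on a single core (-m 0x1), the multiconnection target is started with -m 0xF: each set bit in the mask pins one SPDK reactor to a core, so 0xF (binary 1111) yields the four "Reactor started" notices that follow. A sketch of the launch-and-wait step the harness wraps, assuming the same namespace and the default /var/tmp/spdk.sock RPC socket:

    # bits 0-3 set -> one reactor each on cores 0,1,2,3
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the RPC server answers before issuing configuration RPCs
    until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done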
00:27:19.090 [2024-10-13 14:23:22.434092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.090 [2024-10-13 14:23:22.434194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.090 [2024-10-13 14:23:22.434519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:19.090 [2024-10-13 14:23:22.434521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.352 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:19.352 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:27:19.352 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:19.352 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:19.352 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.613 [2024-10-13 14:23:23.090302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.613 Malloc1 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
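From here multiconnection.sh drives the target over the /var/tmp/spdk.sock RPC socket: one nvmf_create_transport call, then for each i in 1..11 a malloc bdev, a subsystem, a namespace, and a TCP listener. The chunks below repeat this four-RPC pattern verbatim for cnode2 through cnode11. A sketch of the same calls issued directly through scripts/rpc.py (rpc_cmd in the log is a thin wrapper around it; the -o and -u 8192 transport flags are copied from NVMF_TRANSPORT_OPTS above):

    # One-time transport setup for NVMe/TCP.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Per-subsystem pattern, shown for i=1; the test loops over i=1..11.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1   # -a any host, -s serial
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420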
00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.613 [2024-10-13 14:23:23.173147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.613 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 Malloc2 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 Malloc3 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.614 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 Malloc4 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 Malloc5 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 Malloc6 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 Malloc7 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
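Seven of the eleven subsystems (cnode1 through cnode7) are wired up at this point, and the same pattern continues unchanged through cnode11 below. Once the loop finishes, the target-side state can be checked with the nvmf_get_subsystems RPC; a small sketch (the jq filter is illustrative, not part of the test):

    # List every NQN the target exposes; expect the discovery subsystem
    # plus nqn.2016-06.io.spdk:cnode1 .. cnode11.
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'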
00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:19.876 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:19.877 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.877 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.877 Malloc8 00:27:19.877 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.877 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:19.877 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.877 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 Malloc9 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:20.138 14:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 Malloc10 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 Malloc11 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.138 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.139 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:22.054 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:22.054 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:22.054 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.054 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:22.054 14:23:25 
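With all eleven listeners up, the script switches to the host side: for each cnodeN it runs nvme connect over TCP, then waitforserial polls lsblk until a block device with serial SPDKN shows up. A sketch of that loop, with the hostnqn and hostid values copied from the log:

    # Host-side connect loop, i = 1..11.
    for i in $(seq 1 11); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 \
          -n "nqn.2016-06.io.spdk:cnode${i}" \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
          --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
      # waitforserial: the real helper retries up to 15 times with sleep 2.
      # Note: grep -c SPDK1 also matches SPDK10/SPDK11, but those are not
      # connected yet when SPDK1 is polled, so the count stays correct.
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK${i}")" -ge 1 ]; do
        sleep 2
      done
    done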
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:23.969 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:25.368 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:25.368 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:25.368 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:25.368 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:25.368 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.282 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:29.194 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:29.194 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:29.194 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:27:29.194 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:29.194 14:23:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:31.107 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.108 14:23:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:33.020 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:33.020 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:33.020 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:33.020 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:33.020 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.943 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:36.328 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:36.328 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:27:36.328 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.328 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:36.328 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:38.244 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:38.244 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:38.244 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:27:38.505 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:38.505 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.505 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:38.505 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.505 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:39.888 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:39.888 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:39.888 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:39.888 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:39.888 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:42.431 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:43.817 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:43.817 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:43.817 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:43.817 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:43.817 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.737 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:47.652 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:47.652 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:47.652 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:47.652 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:47.652 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:49.566 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:51.490 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:51.490 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:51.490 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:51.490 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:51.490 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:53.403 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:55.315 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:55.315 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:55.315 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:55.315 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:55.315 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:57.227 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:57.227 14:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:59.141 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:59.141 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:59.141 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:59.141 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:59.141 14:24:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:01.078 14:24:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:01.078 [global] 00:28:01.078 thread=1 00:28:01.078 invalidate=1 00:28:01.078 rw=read 00:28:01.078 time_based=1 00:28:01.078 runtime=10 00:28:01.078 ioengine=libaio 00:28:01.078 direct=1 00:28:01.078 bs=262144 00:28:01.078 iodepth=64 00:28:01.078 norandommap=1 00:28:01.078 numjobs=1 00:28:01.078 00:28:01.078 [job0] 00:28:01.078 filename=/dev/nvme0n1 00:28:01.078 [job1] 00:28:01.078 filename=/dev/nvme10n1 00:28:01.078 [job2] 00:28:01.078 filename=/dev/nvme1n1 00:28:01.078 [job3] 00:28:01.078 filename=/dev/nvme2n1 00:28:01.078 [job4] 00:28:01.078 filename=/dev/nvme3n1 00:28:01.078 [job5] 00:28:01.078 filename=/dev/nvme4n1 00:28:01.078 [job6] 00:28:01.078 filename=/dev/nvme5n1 00:28:01.078 [job7] 00:28:01.078 filename=/dev/nvme6n1 00:28:01.078 [job8] 00:28:01.078 filename=/dev/nvme7n1 00:28:01.078 [job9] 00:28:01.078 filename=/dev/nvme8n1 00:28:01.078 [job10] 00:28:01.078 filename=/dev/nvme9n1 00:28:01.078 Could not set queue depth (nvme0n1) 00:28:01.078 Could not set queue depth (nvme10n1) 00:28:01.078 Could not set queue depth (nvme1n1) 00:28:01.078 Could not set queue depth (nvme2n1) 00:28:01.078 Could not set queue depth (nvme3n1) 00:28:01.078 Could not set queue depth (nvme4n1) 00:28:01.078 Could not set queue depth (nvme5n1) 00:28:01.078 Could not set queue depth (nvme6n1) 00:28:01.078 Could not set queue depth (nvme7n1) 00:28:01.078 Could not set queue depth (nvme8n1) 00:28:01.078 Could not set queue depth (nvme9n1) 00:28:01.396 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
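All eleven namespaces are now visible as /dev/nvme*n1 block devices, and fio-wrapper has generated the job file printed above: one read job per device with 256 KiB blocks, queue depth 64, libaio, direct I/O, 10 seconds time-based (the -p nvmf -i 262144 -d 64 -t read -r 10 arguments map directly onto bs, iodepth, rw, and runtime). The "Could not set queue depth" lines are benign fio warnings about device queue settings it cannot tune and do not affect the run. For reference, a single-device invocation of the same shape (a sketch; the wrapper writes the multi-job file instead):

    # One job equivalent to job0 in the generated file above.
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=read --bs=262144 --iodepth=64 --numjobs=1 --norandommap=1 \
        --time_based=1 --runtime=10 --invalidate=1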
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.396 fio-3.35 00:28:01.396 Starting 11 threads 00:28:13.761 00:28:13.761 job0: (groupid=0, jobs=1): err= 0: pid=1808989: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=354, BW=88.5MiB/s (92.8MB/s)(899MiB/10154msec) 00:28:13.761 slat (usec): min=10, max=244194, avg=2780.09, stdev=12703.66 00:28:13.761 clat (msec): min=11, max=965, avg=177.58, stdev=182.32 00:28:13.761 lat (msec): min=12, max=965, avg=180.36, stdev=185.00 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 42], 00:28:13.761 | 30.00th=[ 47], 40.00th=[ 72], 50.00th=[ 140], 60.00th=[ 167], 00:28:13.761 | 70.00th=[ 203], 80.00th=[ 255], 90.00th=[ 380], 95.00th=[ 651], 00:28:13.761 | 99.00th=[ 885], 99.50th=[ 894], 99.90th=[ 969], 99.95th=[ 969], 00:28:13.761 | 99.99th=[ 969] 00:28:13.761 bw ( KiB/s): min=16384, max=389120, per=11.46%, avg=90419.20, stdev=92079.33, samples=20 00:28:13.761 iops : min= 64, max= 1520, avg=353.20, stdev=359.68, samples=20 00:28:13.761 lat (msec) : 20=0.36%, 50=33.84%, 100=10.54%, 250=34.57%, 500=14.43% 00:28:13.761 lat (msec) : 750=3.09%, 1000=3.17% 00:28:13.761 cpu : usr=0.12%, sys=1.31%, ctx=572, majf=0, minf=4097 00:28:13.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:28:13.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.761 issued rwts: total=3596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.761 job1: (groupid=0, jobs=1): err= 0: pid=1809007: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=300, BW=75.0MiB/s (78.7MB/s)(761MiB/10149msec) 00:28:13.761 slat (usec): min=10, max=226357, avg=2112.07, stdev=10615.17 00:28:13.761 clat (msec): min=2, max=894, avg=210.79, stdev=174.23 00:28:13.761 lat (msec): min=2, max=955, avg=212.90, stdev=175.99 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 21], 20.00th=[ 91], 00:28:13.761 | 30.00th=[ 124], 40.00th=[ 140], 50.00th=[ 167], 60.00th=[ 209], 00:28:13.761 | 70.00th=[ 247], 80.00th=[ 300], 90.00th=[ 422], 95.00th=[ 600], 00:28:13.761 | 99.00th=[ 810], 99.50th=[ 852], 99.90th=[ 869], 99.95th=[ 894], 00:28:13.761 | 99.99th=[ 894] 00:28:13.761 bw ( KiB/s): min=17920, max=159744, 
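job0's block above is the template for reading the other ten: "read:" gives the job's aggregate IOPS and bandwidth over the 10-second run, the clat percentiles show the completion-latency distribution in milliseconds, "bw ... per=" is the job's share of the group's total bandwidth (the eleven per= values should sum to roughly 100%), and "issued rwts" counts submitted I/Os. A quick way to check the shares from a saved copy of this output (fio-output.log is a hypothetical filename; assumes GNU grep):

    # Sum the per-job bandwidth shares; expect a total near 100.
    grep -oP 'per=\K[0-9.]+' fio-output.log | awk '{s += $1} END {print s "%"}'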
per=9.67%, avg=76339.20, stdev=36375.68, samples=20 00:28:13.761 iops : min= 70, max= 624, avg=298.20, stdev=142.09, samples=20 00:28:13.761 lat (msec) : 4=1.25%, 10=7.22%, 20=1.48%, 50=6.17%, 100=4.86% 00:28:13.761 lat (msec) : 250=50.02%, 500=22.92%, 750=2.96%, 1000=3.12% 00:28:13.761 cpu : usr=0.12%, sys=1.13%, ctx=915, majf=0, minf=3534 00:28:13.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:28:13.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.761 issued rwts: total=3045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.761 job2: (groupid=0, jobs=1): err= 0: pid=1809028: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=390, BW=97.6MiB/s (102MB/s)(982MiB/10057msec) 00:28:13.761 slat (usec): min=9, max=218162, avg=2008.22, stdev=10135.35 00:28:13.761 clat (usec): min=1112, max=698435, avg=161717.68, stdev=157906.78 00:28:13.761 lat (usec): min=1161, max=719677, avg=163725.90, stdev=159781.31 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 44], 00:28:13.761 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 83], 60.00th=[ 144], 00:28:13.761 | 70.00th=[ 178], 80.00th=[ 300], 90.00th=[ 409], 95.00th=[ 518], 00:28:13.761 | 99.00th=[ 651], 99.50th=[ 667], 99.90th=[ 701], 99.95th=[ 701], 00:28:13.761 | 99.99th=[ 701] 00:28:13.761 bw ( KiB/s): min=23040, max=331264, per=12.54%, avg=98944.00, stdev=96932.73, samples=20 00:28:13.761 iops : min= 90, max= 1294, avg=386.50, stdev=378.64, samples=20 00:28:13.761 lat (msec) : 2=0.61%, 4=0.31%, 10=1.93%, 20=2.32%, 50=20.62% 00:28:13.761 lat (msec) : 100=26.78%, 250=23.45%, 500=18.51%, 750=5.47% 00:28:13.761 cpu : usr=0.19%, sys=1.34%, ctx=912, majf=0, minf=4097 00:28:13.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:13.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.761 issued rwts: total=3928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.761 job3: (groupid=0, jobs=1): err= 0: pid=1809045: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=192, BW=48.1MiB/s (50.4MB/s)(488MiB/10140msec) 00:28:13.761 slat (usec): min=11, max=221838, avg=3943.18, stdev=16679.48 00:28:13.761 clat (msec): min=3, max=982, avg=328.41, stdev=196.86 00:28:13.761 lat (msec): min=3, max=982, avg=332.35, stdev=199.00 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 6], 5.00th=[ 81], 10.00th=[ 122], 20.00th=[ 153], 00:28:13.761 | 30.00th=[ 178], 40.00th=[ 224], 50.00th=[ 334], 60.00th=[ 384], 00:28:13.761 | 70.00th=[ 422], 80.00th=[ 472], 90.00th=[ 531], 95.00th=[ 726], 00:28:13.761 | 99.00th=[ 919], 99.50th=[ 986], 99.90th=[ 986], 99.95th=[ 986], 00:28:13.761 | 99.99th=[ 986] 00:28:13.761 bw ( KiB/s): min=12288, max=100864, per=6.12%, avg=48281.60, stdev=24610.84, samples=20 00:28:13.761 iops : min= 48, max= 394, avg=188.60, stdev=96.14, samples=20 00:28:13.761 lat (msec) : 4=0.41%, 10=1.03%, 20=0.46%, 50=1.38%, 100=3.90% 00:28:13.761 lat (msec) : 250=35.74%, 500=42.67%, 750=9.95%, 1000=4.46% 00:28:13.761 cpu : usr=0.05%, sys=0.71%, ctx=381, majf=0, minf=4097 00:28:13.761 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:28:13.761 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.761 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.761 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.761 job4: (groupid=0, jobs=1): err= 0: pid=1809053: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=215, BW=54.0MiB/s (56.6MB/s)(550MiB/10185msec) 00:28:13.761 slat (usec): min=5, max=350499, avg=3168.94, stdev=15500.78 00:28:13.761 clat (msec): min=12, max=1022, avg=292.81, stdev=202.95 00:28:13.761 lat (msec): min=14, max=1046, avg=295.98, stdev=204.22 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 25], 5.00th=[ 49], 10.00th=[ 104], 20.00th=[ 161], 00:28:13.761 | 30.00th=[ 178], 40.00th=[ 199], 50.00th=[ 222], 60.00th=[ 253], 00:28:13.761 | 70.00th=[ 317], 80.00th=[ 422], 90.00th=[ 634], 95.00th=[ 726], 00:28:13.761 | 99.00th=[ 944], 99.50th=[ 1003], 99.90th=[ 1020], 99.95th=[ 1020], 00:28:13.761 | 99.99th=[ 1020] 00:28:13.761 bw ( KiB/s): min=13312, max=116224, per=6.93%, avg=54656.00, stdev=31448.76, samples=20 00:28:13.761 iops : min= 52, max= 454, avg=213.50, stdev=122.85, samples=20 00:28:13.761 lat (msec) : 20=0.55%, 50=4.59%, 100=4.41%, 250=49.98%, 500=26.10% 00:28:13.761 lat (msec) : 750=10.37%, 1000=3.46%, 2000=0.55% 00:28:13.761 cpu : usr=0.08%, sys=0.75%, ctx=422, majf=0, minf=4097 00:28:13.761 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:28:13.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.761 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.761 job5: (groupid=0, jobs=1): err= 0: pid=1809076: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=250, BW=62.6MiB/s (65.6MB/s)(627MiB/10024msec) 00:28:13.761 slat (usec): min=12, max=213521, avg=3251.35, stdev=13677.23 00:28:13.761 clat (msec): min=23, max=845, avg=252.22, stdev=169.65 00:28:13.761 lat (msec): min=27, max=891, avg=255.47, stdev=171.27 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 52], 5.00th=[ 63], 10.00th=[ 89], 20.00th=[ 103], 00:28:13.761 | 30.00th=[ 111], 40.00th=[ 124], 50.00th=[ 194], 60.00th=[ 309], 00:28:13.761 | 70.00th=[ 359], 80.00th=[ 414], 90.00th=[ 477], 95.00th=[ 523], 00:28:13.761 | 99.00th=[ 802], 99.50th=[ 835], 99.90th=[ 835], 99.95th=[ 835], 00:28:13.761 | 99.99th=[ 844] 00:28:13.761 bw ( KiB/s): min=18432, max=164352, per=7.93%, avg=62617.60, stdev=44254.88, samples=20 00:28:13.761 iops : min= 72, max= 642, avg=244.60, stdev=172.87, samples=20 00:28:13.761 lat (msec) : 50=0.88%, 100=15.34%, 250=37.23%, 500=39.14%, 750=5.86% 00:28:13.761 lat (msec) : 1000=1.55% 00:28:13.761 cpu : usr=0.13%, sys=0.97%, ctx=484, majf=0, minf=4097 00:28:13.761 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:28:13.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.761 issued rwts: total=2509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.761 job6: (groupid=0, jobs=1): err= 0: pid=1809088: Sun Oct 13 14:24:15 2024 00:28:13.761 read: IOPS=319, BW=79.9MiB/s (83.8MB/s)(811MiB/10150msec) 00:28:13.761 slat (usec): min=8, max=494680, avg=2469.28, 
stdev=16356.05 00:28:13.761 clat (msec): min=12, max=1109, avg=197.48, stdev=221.03 00:28:13.761 lat (msec): min=14, max=1169, avg=199.95, stdev=223.70 00:28:13.761 clat percentiles (msec): 00:28:13.761 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:28:13.761 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 131], 60.00th=[ 184], 00:28:13.761 | 70.00th=[ 264], 80.00th=[ 355], 90.00th=[ 456], 95.00th=[ 523], 00:28:13.761 | 99.00th=[ 1062], 99.50th=[ 1083], 99.90th=[ 1116], 99.95th=[ 1116], 00:28:13.761 | 99.99th=[ 1116] 00:28:13.761 bw ( KiB/s): min= 8704, max=491520, per=10.32%, avg=81413.50, stdev=110223.74, samples=20 00:28:13.761 iops : min= 34, max= 1920, avg=318.00, stdev=430.57, samples=20 00:28:13.761 lat (msec) : 20=0.31%, 50=43.46%, 100=1.79%, 250=23.37%, 500=24.48% 00:28:13.761 lat (msec) : 750=3.05%, 1000=1.51%, 2000=2.03% 00:28:13.762 cpu : usr=0.10%, sys=1.10%, ctx=539, majf=0, minf=4097 00:28:13.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:28:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.762 issued rwts: total=3244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.762 job7: (groupid=0, jobs=1): err= 0: pid=1809098: Sun Oct 13 14:24:15 2024 00:28:13.762 read: IOPS=330, BW=82.5MiB/s (86.5MB/s)(831MiB/10065msec) 00:28:13.762 slat (usec): min=13, max=232689, avg=2857.04, stdev=12241.24 00:28:13.762 clat (msec): min=3, max=716, avg=190.69, stdev=163.30 00:28:13.762 lat (msec): min=3, max=716, avg=193.55, stdev=165.73 00:28:13.762 clat percentiles (msec): 00:28:13.762 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 25], 20.00th=[ 54], 00:28:13.762 | 30.00th=[ 63], 40.00th=[ 118], 50.00th=[ 148], 60.00th=[ 188], 00:28:13.762 | 70.00th=[ 239], 80.00th=[ 338], 90.00th=[ 451], 95.00th=[ 510], 00:28:13.762 | 99.00th=[ 642], 99.50th=[ 684], 99.90th=[ 693], 99.95th=[ 718], 00:28:13.762 | 99.99th=[ 718] 00:28:13.762 bw ( KiB/s): min=25600, max=279040, per=10.57%, avg=83459.95, stdev=75306.38, samples=20 00:28:13.762 iops : min= 100, max= 1090, avg=326.00, stdev=294.18, samples=20 00:28:13.762 lat (msec) : 4=0.03%, 10=7.37%, 20=1.35%, 50=8.31%, 100=22.51% 00:28:13.762 lat (msec) : 250=32.29%, 500=21.79%, 750=6.35% 00:28:13.762 cpu : usr=0.13%, sys=1.29%, ctx=655, majf=0, minf=4098 00:28:13.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:28:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.762 issued rwts: total=3323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.762 job8: (groupid=0, jobs=1): err= 0: pid=1809111: Sun Oct 13 14:24:15 2024 00:28:13.762 read: IOPS=226, BW=56.6MiB/s (59.3MB/s)(571MiB/10098msec) 00:28:13.762 slat (usec): min=8, max=236185, avg=2855.38, stdev=12600.21 00:28:13.762 clat (usec): min=1322, max=963860, avg=279553.13, stdev=190257.54 00:28:13.762 lat (usec): min=1372, max=963886, avg=282408.52, stdev=191717.91 00:28:13.762 clat percentiles (msec): 00:28:13.762 | 1.00th=[ 3], 5.00th=[ 22], 10.00th=[ 48], 20.00th=[ 140], 00:28:13.762 | 30.00th=[ 163], 40.00th=[ 190], 50.00th=[ 228], 60.00th=[ 279], 00:28:13.762 | 70.00th=[ 363], 80.00th=[ 435], 90.00th=[ 542], 95.00th=[ 651], 00:28:13.762 | 99.00th=[ 860], 99.50th=[ 927], 
99.90th=[ 961], 99.95th=[ 961], 00:28:13.762 | 99.99th=[ 961] 00:28:13.762 bw ( KiB/s): min=24576, max=138752, per=7.21%, avg=56883.20, stdev=31069.18, samples=20 00:28:13.762 iops : min= 96, max= 542, avg=222.20, stdev=121.36, samples=20 00:28:13.762 lat (msec) : 2=0.61%, 4=0.70%, 20=2.28%, 50=6.52%, 100=2.36% 00:28:13.762 lat (msec) : 250=43.15%, 500=31.33%, 750=10.72%, 1000=2.32% 00:28:13.762 cpu : usr=0.07%, sys=0.87%, ctx=437, majf=0, minf=4097 00:28:13.762 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:28:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.762 issued rwts: total=2285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.762 job9: (groupid=0, jobs=1): err= 0: pid=1809124: Sun Oct 13 14:24:15 2024 00:28:13.762 read: IOPS=352, BW=88.2MiB/s (92.5MB/s)(895MiB/10153msec) 00:28:13.762 slat (usec): min=12, max=283603, avg=1827.29, stdev=9770.42 00:28:13.762 clat (msec): min=2, max=753, avg=179.26, stdev=148.40 00:28:13.762 lat (msec): min=2, max=753, avg=181.08, stdev=149.52 00:28:13.762 clat percentiles (msec): 00:28:13.762 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 48], 20.00th=[ 75], 00:28:13.762 | 30.00th=[ 88], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 144], 00:28:13.762 | 70.00th=[ 199], 80.00th=[ 326], 90.00th=[ 409], 95.00th=[ 472], 00:28:13.762 | 99.00th=[ 634], 99.50th=[ 718], 99.90th=[ 751], 99.95th=[ 751], 00:28:13.762 | 99.99th=[ 751] 00:28:13.762 bw ( KiB/s): min=33280, max=200192, per=11.41%, avg=90045.20, stdev=54120.23, samples=20 00:28:13.762 iops : min= 130, max= 782, avg=351.70, stdev=211.40, samples=20 00:28:13.762 lat (msec) : 4=0.06%, 10=1.54%, 20=1.06%, 50=9.44%, 100=33.09% 00:28:13.762 lat (msec) : 250=28.79%, 500=22.59%, 750=3.30%, 1000=0.14% 00:28:13.762 cpu : usr=0.16%, sys=1.32%, ctx=1037, majf=0, minf=4097 00:28:13.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:28:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.762 issued rwts: total=3581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.762 job10: (groupid=0, jobs=1): err= 0: pid=1809134: Sun Oct 13 14:24:15 2024 00:28:13.762 read: IOPS=171, BW=42.9MiB/s (44.9MB/s)(435MiB/10144msec) 00:28:13.762 slat (usec): min=12, max=341279, avg=5324.16, stdev=22204.50 00:28:13.762 clat (msec): min=17, max=1166, avg=367.52, stdev=217.68 00:28:13.762 lat (msec): min=18, max=1180, avg=372.84, stdev=220.44 00:28:13.762 clat percentiles (msec): 00:28:13.762 | 1.00th=[ 55], 5.00th=[ 153], 10.00th=[ 171], 20.00th=[ 197], 00:28:13.762 | 30.00th=[ 222], 40.00th=[ 251], 50.00th=[ 288], 60.00th=[ 326], 00:28:13.762 | 70.00th=[ 456], 80.00th=[ 550], 90.00th=[ 667], 95.00th=[ 768], 00:28:13.762 | 99.00th=[ 1083], 99.50th=[ 1150], 99.90th=[ 1167], 99.95th=[ 1167], 00:28:13.762 | 99.99th=[ 1167] 00:28:13.762 bw ( KiB/s): min=12800, max=84992, per=5.43%, avg=42876.75, stdev=22594.36, samples=20 00:28:13.762 iops : min= 50, max= 332, avg=167.45, stdev=88.28, samples=20 00:28:13.762 lat (msec) : 20=0.52%, 100=2.36%, 250=36.46%, 500=35.65%, 750=19.55% 00:28:13.762 lat (msec) : 1000=3.11%, 2000=2.36% 00:28:13.762 cpu : usr=0.05%, sys=0.70%, ctx=276, majf=0, minf=4097 00:28:13.762 IO depths : 
1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:28:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.762 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:13.762 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:13.762 00:28:13.762 Run status group 0 (all jobs): 00:28:13.762 READ: bw=771MiB/s (808MB/s), 42.9MiB/s-97.6MiB/s (44.9MB/s-102MB/s), io=7850MiB (8231MB), run=10024-10185msec 00:28:13.762 00:28:13.762 Disk stats (read/write): 00:28:13.762 nvme0n1: ios=7092/0, merge=0/0, ticks=1223522/0, in_queue=1223522, util=96.34% 00:28:13.762 nvme10n1: ios=6017/0, merge=0/0, ticks=1230996/0, in_queue=1230996, util=96.64% 00:28:13.762 nvme1n1: ios=7592/0, merge=0/0, ticks=1225632/0, in_queue=1225632, util=96.81% 00:28:13.762 nvme2n1: ios=3807/0, merge=0/0, ticks=1224573/0, in_queue=1224573, util=97.11% 00:28:13.762 nvme3n1: ios=4309/0, merge=0/0, ticks=1228804/0, in_queue=1228804, util=97.25% 00:28:13.762 nvme4n1: ios=4589/0, merge=0/0, ticks=1225987/0, in_queue=1225987, util=97.60% 00:28:13.762 nvme5n1: ios=6406/0, merge=0/0, ticks=1242711/0, in_queue=1242711, util=97.98% 00:28:13.762 nvme6n1: ios=6382/0, merge=0/0, ticks=1220359/0, in_queue=1220359, util=98.13% 00:28:13.762 nvme7n1: ios=4568/0, merge=0/0, ticks=1260462/0, in_queue=1260462, util=98.64% 00:28:13.762 nvme8n1: ios=7034/0, merge=0/0, ticks=1225360/0, in_queue=1225360, util=98.98% 00:28:13.762 nvme9n1: ios=3399/0, merge=0/0, ticks=1225312/0, in_queue=1225312, util=99.09% 00:28:13.762 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:13.762 [global] 00:28:13.762 thread=1 00:28:13.762 invalidate=1 00:28:13.762 rw=randwrite 00:28:13.762 time_based=1 00:28:13.762 runtime=10 00:28:13.762 ioengine=libaio 00:28:13.762 direct=1 00:28:13.762 bs=262144 00:28:13.762 iodepth=64 00:28:13.762 norandommap=1 00:28:13.762 numjobs=1 00:28:13.762 00:28:13.762 [job0] 00:28:13.762 filename=/dev/nvme0n1 00:28:13.762 [job1] 00:28:13.762 filename=/dev/nvme10n1 00:28:13.762 [job2] 00:28:13.762 filename=/dev/nvme1n1 00:28:13.762 [job3] 00:28:13.762 filename=/dev/nvme2n1 00:28:13.762 [job4] 00:28:13.762 filename=/dev/nvme3n1 00:28:13.762 [job5] 00:28:13.762 filename=/dev/nvme4n1 00:28:13.762 [job6] 00:28:13.762 filename=/dev/nvme5n1 00:28:13.762 [job7] 00:28:13.762 filename=/dev/nvme6n1 00:28:13.762 [job8] 00:28:13.762 filename=/dev/nvme7n1 00:28:13.762 [job9] 00:28:13.762 filename=/dev/nvme8n1 00:28:13.762 [job10] 00:28:13.762 filename=/dev/nvme9n1 00:28:13.762 Could not set queue depth (nvme0n1) 00:28:13.762 Could not set queue depth (nvme10n1) 00:28:13.762 Could not set queue depth (nvme1n1) 00:28:13.762 Could not set queue depth (nvme2n1) 00:28:13.762 Could not set queue depth (nvme3n1) 00:28:13.762 Could not set queue depth (nvme4n1) 00:28:13.762 Could not set queue depth (nvme5n1) 00:28:13.762 Could not set queue depth (nvme6n1) 00:28:13.762 Could not set queue depth (nvme7n1) 00:28:13.762 Could not set queue depth (nvme8n1) 00:28:13.762 Could not set queue depth (nvme9n1) 00:28:13.762 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:28:13.762 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:13.762 fio-3.35 00:28:13.762 Starting 11 threads 00:28:23.768 00:28:23.768 job0: (groupid=0, jobs=1): err= 0: pid=1811122: Sun Oct 13 14:24:26 2024 00:28:23.768 write: IOPS=540, BW=135MiB/s (142MB/s)(1370MiB/10150msec); 0 zone resets 00:28:23.768 slat (usec): min=23, max=18456, avg=1821.62, stdev=3476.95 00:28:23.768 clat (msec): min=13, max=340, avg=116.65, stdev=48.65 00:28:23.768 lat (msec): min=13, max=340, avg=118.47, stdev=49.27 00:28:23.768 clat percentiles (msec): 00:28:23.768 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 81], 00:28:23.768 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 132], 00:28:23.768 | 70.00th=[ 146], 80.00th=[ 157], 90.00th=[ 188], 95.00th=[ 201], 00:28:23.768 | 99.00th=[ 236], 99.50th=[ 271], 99.90th=[ 334], 99.95th=[ 334], 00:28:23.768 | 99.99th=[ 342] 00:28:23.768 bw ( KiB/s): min=71680, max=264721, per=9.91%, avg=138701.65, stdev=52496.48, samples=20 00:28:23.768 iops : min= 280, max= 1034, avg=541.80, stdev=205.06, samples=20 00:28:23.768 lat (msec) : 20=0.07%, 50=1.37%, 100=50.37%, 250=47.56%, 500=0.62% 00:28:23.768 cpu : usr=1.31%, sys=1.70%, ctx=1348, majf=0, minf=1 00:28:23.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:23.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.768 issued rwts: total=0,5481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.768 job1: (groupid=0, jobs=1): err= 0: pid=1811130: Sun Oct 13 14:24:26 2024 00:28:23.768 write: IOPS=509, BW=127MiB/s (134MB/s)(1293MiB/10149msec); 0 zone resets 00:28:23.768 slat (usec): min=25, max=158409, avg=1899.21, stdev=4585.30 00:28:23.768 clat (msec): min=25, max=339, avg=123.56, stdev=45.39 00:28:23.768 lat (msec): min=25, max=339, avg=125.46, stdev=45.87 00:28:23.768 clat percentiles (msec): 00:28:23.768 | 1.00th=[ 40], 5.00th=[ 59], 10.00th=[ 69], 20.00th=[ 73], 00:28:23.768 | 30.00th=[ 96], 40.00th=[ 125], 50.00th=[ 132], 60.00th=[ 138], 00:28:23.768 | 70.00th=[ 144], 80.00th=[ 150], 90.00th=[ 171], 95.00th=[ 205], 00:28:23.768 | 99.00th=[ 247], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 330], 00:28:23.768 | 99.99th=[ 338] 00:28:23.768 bw ( KiB/s): min=71680, max=273408, per=9.35%, avg=130799.20, stdev=47886.54, 
samples=20 00:28:23.768 iops : min= 280, max= 1068, avg=510.90, stdev=187.09, samples=20 00:28:23.768 lat (msec) : 50=1.72%, 100=29.05%, 250=68.26%, 500=0.97% 00:28:23.768 cpu : usr=1.12%, sys=1.40%, ctx=1387, majf=0, minf=1 00:28:23.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:23.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.768 issued rwts: total=0,5173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.768 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.768 job2: (groupid=0, jobs=1): err= 0: pid=1811142: Sun Oct 13 14:24:26 2024 00:28:23.768 write: IOPS=532, BW=133MiB/s (140MB/s)(1351MiB/10147msec); 0 zone resets 00:28:23.769 slat (usec): min=28, max=18620, avg=1833.36, stdev=3478.33 00:28:23.769 clat (msec): min=16, max=341, avg=118.24, stdev=47.71 00:28:23.769 lat (msec): min=16, max=341, avg=120.07, stdev=48.29 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 82], 00:28:23.769 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 132], 00:28:23.769 | 70.00th=[ 146], 80.00th=[ 159], 90.00th=[ 188], 95.00th=[ 201], 00:28:23.769 | 99.00th=[ 236], 99.50th=[ 275], 99.90th=[ 334], 99.95th=[ 334], 00:28:23.769 | 99.99th=[ 342] 00:28:23.769 bw ( KiB/s): min=71680, max=241664, per=9.77%, avg=136755.20, stdev=49202.47, samples=20 00:28:23.769 iops : min= 280, max= 944, avg=534.20, stdev=192.20, samples=20 00:28:23.769 lat (msec) : 20=0.07%, 50=0.22%, 100=50.71%, 250=48.36%, 500=0.63% 00:28:23.769 cpu : usr=1.21%, sys=1.84%, ctx=1348, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,5405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.769 job3: (groupid=0, jobs=1): err= 0: pid=1811149: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=364, BW=91.2MiB/s (95.6MB/s)(922MiB/10103msec); 0 zone resets 00:28:23.769 slat (usec): min=25, max=31401, avg=2641.57, stdev=5267.74 00:28:23.769 clat (msec): min=12, max=317, avg=172.72, stdev=74.43 00:28:23.769 lat (msec): min=12, max=317, avg=175.36, stdev=75.46 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 44], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73], 00:28:23.769 | 30.00th=[ 114], 40.00th=[ 155], 50.00th=[ 203], 60.00th=[ 222], 00:28:23.769 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 259], 00:28:23.769 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 317], 99.95th=[ 317], 00:28:23.769 | 99.99th=[ 317] 00:28:23.769 bw ( KiB/s): min=59392, max=233472, per=6.63%, avg=92763.20, stdev=49380.13, samples=20 00:28:23.769 iops : min= 232, max= 912, avg=362.35, stdev=192.88, samples=20 00:28:23.769 lat (msec) : 20=0.14%, 50=1.14%, 100=25.91%, 250=58.27%, 500=14.54% 00:28:23.769 cpu : usr=0.85%, sys=1.23%, ctx=1014, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,3686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:28:23.769 job4: (groupid=0, jobs=1): err= 0: pid=1811153: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=356, BW=89.1MiB/s (93.4MB/s)(901MiB/10106msec); 0 zone resets 00:28:23.769 slat (usec): min=24, max=51605, avg=2713.74, stdev=5377.33 00:28:23.769 clat (msec): min=8, max=301, avg=176.76, stdev=71.31 00:28:23.769 lat (msec): min=8, max=301, avg=179.47, stdev=72.29 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 60], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 75], 00:28:23.769 | 30.00th=[ 142], 40.00th=[ 163], 50.00th=[ 209], 60.00th=[ 222], 00:28:23.769 | 70.00th=[ 232], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 255], 00:28:23.769 | 99.00th=[ 284], 99.50th=[ 296], 99.90th=[ 300], 99.95th=[ 300], 00:28:23.769 | 99.99th=[ 300] 00:28:23.769 bw ( KiB/s): min=63488, max=232960, per=6.47%, avg=90572.80, stdev=46939.95, samples=20 00:28:23.769 iops : min= 248, max= 910, avg=353.80, stdev=183.36, samples=20 00:28:23.769 lat (msec) : 10=0.06%, 20=0.17%, 50=0.33%, 100=25.15%, 250=62.77% 00:28:23.769 lat (msec) : 500=11.52% 00:28:23.769 cpu : usr=0.73%, sys=1.03%, ctx=958, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,3602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.769 job5: (groupid=0, jobs=1): err= 0: pid=1811166: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=496, BW=124MiB/s (130MB/s)(1261MiB/10149msec); 0 zone resets 00:28:23.769 slat (usec): min=27, max=129570, avg=1924.17, stdev=4146.08 00:28:23.769 clat (msec): min=8, max=342, avg=126.36, stdev=48.80 00:28:23.769 lat (msec): min=8, max=342, avg=128.28, stdev=49.39 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 43], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 85], 00:28:23.769 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 125], 60.00th=[ 136], 00:28:23.769 | 70.00th=[ 153], 80.00th=[ 163], 90.00th=[ 192], 95.00th=[ 218], 00:28:23.769 | 99.00th=[ 266], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:28:23.769 | 99.99th=[ 342] 00:28:23.769 bw ( KiB/s): min=52736, max=195072, per=9.11%, avg=127462.40, stdev=44468.66, samples=20 00:28:23.769 iops : min= 206, max= 762, avg=497.90, stdev=173.71, samples=20 00:28:23.769 lat (msec) : 10=0.02%, 20=0.28%, 50=1.15%, 100=43.59%, 250=53.51% 00:28:23.769 lat (msec) : 500=1.45% 00:28:23.769 cpu : usr=1.01%, sys=1.55%, ctx=1403, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,5042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.769 job6: (groupid=0, jobs=1): err= 0: pid=1811168: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=1040, BW=260MiB/s (273MB/s)(2614MiB/10046msec); 0 zone resets 00:28:23.769 slat (usec): min=23, max=95400, avg=943.55, stdev=2156.61 00:28:23.769 clat (msec): min=32, max=259, avg=60.29, stdev=24.17 00:28:23.769 lat (msec): min=39, max=259, avg=61.24, stdev=24.49 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 43], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 
50], 00:28:23.769 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 53], 00:28:23.769 | 70.00th=[ 55], 80.00th=[ 62], 90.00th=[ 89], 95.00th=[ 102], 00:28:23.769 | 99.00th=[ 190], 99.50th=[ 213], 99.90th=[ 243], 99.95th=[ 245], 00:28:23.769 | 99.99th=[ 259] 00:28:23.769 bw ( KiB/s): min=61952, max=338432, per=19.01%, avg=266035.20, stdev=78323.48, samples=20 00:28:23.769 iops : min= 242, max= 1322, avg=1039.20, stdev=305.95, samples=20 00:28:23.769 lat (msec) : 50=30.22%, 100=64.47%, 250=5.28%, 500=0.03% 00:28:23.769 cpu : usr=2.41%, sys=2.76%, ctx=2594, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,10455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.769 job7: (groupid=0, jobs=1): err= 0: pid=1811181: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=416, BW=104MiB/s (109MB/s)(1052MiB/10107msec); 0 zone resets 00:28:23.769 slat (usec): min=23, max=62337, avg=2213.09, stdev=4809.10 00:28:23.769 clat (msec): min=4, max=303, avg=151.45, stdev=76.19 00:28:23.769 lat (msec): min=4, max=311, avg=153.66, stdev=77.23 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 11], 5.00th=[ 59], 10.00th=[ 74], 20.00th=[ 84], 00:28:23.769 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 133], 60.00th=[ 169], 00:28:23.769 | 70.00th=[ 226], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 257], 00:28:23.769 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300], 00:28:23.769 | 99.99th=[ 305] 00:28:23.769 bw ( KiB/s): min=62464, max=211968, per=7.58%, avg=106060.80, stdev=53159.38, samples=20 00:28:23.769 iops : min= 244, max= 828, avg=414.30, stdev=207.65, samples=20 00:28:23.769 lat (msec) : 10=0.86%, 20=1.09%, 50=2.31%, 100=38.58%, 250=46.09% 00:28:23.769 lat (msec) : 500=11.08% 00:28:23.769 cpu : usr=0.85%, sys=1.32%, ctx=1345, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,4207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.769 job8: (groupid=0, jobs=1): err= 0: pid=1811210: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=409, BW=102MiB/s (107MB/s)(1040MiB/10146msec); 0 zone resets 00:28:23.769 slat (usec): min=22, max=160581, avg=2195.51, stdev=6467.14 00:28:23.769 clat (msec): min=7, max=350, avg=153.88, stdev=54.49 00:28:23.769 lat (msec): min=7, max=350, avg=156.07, stdev=55.02 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 19], 5.00th=[ 46], 10.00th=[ 84], 20.00th=[ 122], 00:28:23.769 | 30.00th=[ 131], 40.00th=[ 142], 50.00th=[ 148], 60.00th=[ 165], 00:28:23.769 | 70.00th=[ 188], 80.00th=[ 201], 90.00th=[ 220], 95.00th=[ 230], 00:28:23.769 | 99.00th=[ 288], 99.50th=[ 313], 99.90th=[ 338], 99.95th=[ 347], 00:28:23.769 | 99.99th=[ 351] 00:28:23.769 bw ( KiB/s): min=73728, max=154112, per=7.49%, avg=104842.80, stdev=20256.19, samples=20 00:28:23.769 iops : min= 288, max= 602, avg=409.50, stdev=79.12, samples=20 00:28:23.769 lat (msec) : 10=0.07%, 20=1.13%, 50=4.21%, 100=7.09%, 250=84.75% 00:28:23.769 lat (msec) : 500=2.74% 
00:28:23.769 cpu : usr=0.90%, sys=1.27%, ctx=1327, majf=0, minf=1 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.769 issued rwts: total=0,4158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.769 job9: (groupid=0, jobs=1): err= 0: pid=1811224: Sun Oct 13 14:24:26 2024 00:28:23.769 write: IOPS=320, BW=80.0MiB/s (83.9MB/s)(807MiB/10080msec); 0 zone resets 00:28:23.769 slat (usec): min=22, max=81769, avg=2946.72, stdev=5973.48 00:28:23.769 clat (msec): min=2, max=342, avg=196.88, stdev=63.01 00:28:23.769 lat (msec): min=2, max=342, avg=199.82, stdev=63.80 00:28:23.769 clat percentiles (msec): 00:28:23.769 | 1.00th=[ 7], 5.00th=[ 57], 10.00th=[ 111], 20.00th=[ 153], 00:28:23.769 | 30.00th=[ 194], 40.00th=[ 205], 50.00th=[ 213], 60.00th=[ 222], 00:28:23.769 | 70.00th=[ 228], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 266], 00:28:23.769 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 342], 00:28:23.769 | 99.99th=[ 342] 00:28:23.769 bw ( KiB/s): min=59392, max=140056, per=5.79%, avg=80986.80, stdev=18904.43, samples=20 00:28:23.769 iops : min= 232, max= 547, avg=316.35, stdev=73.83, samples=20 00:28:23.769 lat (msec) : 4=0.09%, 10=2.63%, 20=0.99%, 50=1.08%, 100=2.97% 00:28:23.769 lat (msec) : 250=80.35%, 500=11.87% 00:28:23.769 cpu : usr=0.73%, sys=0.91%, ctx=993, majf=0, minf=2 00:28:23.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:23.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.770 issued rwts: total=0,3227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.770 job10: (groupid=0, jobs=1): err= 0: pid=1811235: Sun Oct 13 14:24:26 2024 00:28:23.770 write: IOPS=501, BW=125MiB/s (131MB/s)(1262MiB/10073msec); 0 zone resets 00:28:23.770 slat (usec): min=24, max=94638, avg=1811.65, stdev=4099.02 00:28:23.770 clat (msec): min=2, max=339, avg=125.83, stdev=56.17 00:28:23.770 lat (msec): min=2, max=343, avg=127.64, stdev=56.81 00:28:23.770 clat percentiles (msec): 00:28:23.770 | 1.00th=[ 12], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 73], 00:28:23.770 | 30.00th=[ 78], 40.00th=[ 112], 50.00th=[ 127], 60.00th=[ 140], 00:28:23.770 | 70.00th=[ 148], 80.00th=[ 169], 90.00th=[ 207], 95.00th=[ 222], 00:28:23.770 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:28:23.770 | 99.99th=[ 338] 00:28:23.770 bw ( KiB/s): min=73728, max=249856, per=9.12%, avg=127616.00, stdev=48891.34, samples=20 00:28:23.770 iops : min= 288, max= 976, avg=498.50, stdev=190.98, samples=20 00:28:23.770 lat (msec) : 4=0.14%, 10=0.83%, 20=0.57%, 50=2.02%, 100=32.90% 00:28:23.770 lat (msec) : 250=61.61%, 500=1.92% 00:28:23.770 cpu : usr=1.14%, sys=1.47%, ctx=1594, majf=0, minf=1 00:28:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:23.770 issued rwts: total=0,5048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.770 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:23.770 00:28:23.770 Run status 
group 0 (all jobs): 00:28:23.770 WRITE: bw=1367MiB/s (1433MB/s), 80.0MiB/s-260MiB/s (83.9MB/s-273MB/s), io=13.5GiB (14.5GB), run=10046-10150msec 00:28:23.770 00:28:23.770 Disk stats (read/write): 00:28:23.770 nvme0n1: ios=49/10880, merge=0/0, ticks=53/1218960, in_queue=1219013, util=96.64% 00:28:23.770 nvme10n1: ios=49/10262, merge=0/0, ticks=5800/1199416, in_queue=1205216, util=100.00% 00:28:23.770 nvme1n1: ios=43/10732, merge=0/0, ticks=822/1218358, in_queue=1219180, util=100.00% 00:28:23.770 nvme2n1: ios=0/7334, merge=0/0, ticks=0/1226808, in_queue=1226808, util=97.16% 00:28:23.770 nvme3n1: ios=44/7160, merge=0/0, ticks=1885/1227118, in_queue=1229003, util=100.00% 00:28:23.770 nvme4n1: ios=48/10006, merge=0/0, ticks=1299/1215455, in_queue=1216754, util=100.00% 00:28:23.770 nvme5n1: ios=46/20372, merge=0/0, ticks=2089/1191198, in_queue=1193287, util=100.00% 00:28:23.770 nvme6n1: ios=44/8371, merge=0/0, ticks=2105/1227710, in_queue=1229815, util=100.00% 00:28:23.770 nvme7n1: ios=48/8239, merge=0/0, ticks=6285/1171769, in_queue=1178054, util=100.00% 00:28:23.770 nvme8n1: ios=42/6225, merge=0/0, ticks=1828/1191612, in_queue=1193440, util=100.00% 00:28:23.770 nvme9n1: ios=42/9868, merge=0/0, ticks=2390/1189252, in_queue=1191642, util=99.87% 00:28:23.770 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:23.770 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:23.770 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:23.770 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:23.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:23.770 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 
00:28:24.031 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.031 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:24.292 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:24.292 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.552 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.552 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 
00:28:24.813 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:24.813 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:24.813 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.073 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 
00:28:25.333 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.333 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:25.333 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:25.333 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:25.333 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:25.333 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:25.333 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:28:25.333 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:25.333 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 
00:28:25.593 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.593 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:25.854 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 
00:28:25.854 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:25.854 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:26.115 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 
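The teardown traced above is easier to follow as the loop it expands from. A minimal sketch, reconstructed from the multiconnection.sh@36-40 xtrace markers in this log (waitforserial_disconnect and rpc_cmd are helpers from the SPDK test tree, not standalone tools, and NVMF_SUBSYS is the subsystem count set earlier in the script):

    sync
    for i in $(seq 1 $NVMF_SUBSYS); do
        # drop the initiator-side connection for subsystem $i
        nvme disconnect -n nqn.2016-06.io.spdk:cnode$i
        # poll lsblk until the serial SPDK$i no longer appears
        waitforserial_disconnect SPDK$i
        # delete the subsystem on the target side via the SPDK JSON-RPC
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    done

Each iteration accounts for one "NQN:...cnodeN disconnected 1 controller(s)" line above; the rm -f of the fio verify-state file and the trap reset then hand control to nvmftestfini for the target-side cleanup that follows.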
00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:26.115 rmmod nvme_tcp 00:28:26.115 rmmod nvme_fabrics 00:28:26.115 rmmod nvme_keyring 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 1800435 ']' 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 1800435 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1800435 ']' 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1800435 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.115 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1800435 00:28:26.376 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:26.376 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:26.376 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1800435' 00:28:26.376 killing process with pid 1800435 00:28:26.376 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1800435 00:28:26.376 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1800435 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # 
iptables-restore 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.636 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.547 00:28:28.547 real 1m17.963s 00:28:28.547 user 4m58.694s 00:28:28.547 sys 0m17.459s 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:28.547 ************************************ 00:28:28.547 END TEST nvmf_multiconnection 00:28:28.547 ************************************ 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:28.547 ************************************ 00:28:28.547 START TEST nvmf_initiator_timeout 00:28:28.547 ************************************ 00:28:28.547 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:28.809 * Looking for test storage... 
00:28:28.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.809 --rc genhtml_branch_coverage=1 00:28:28.809 --rc genhtml_function_coverage=1 00:28:28.809 --rc genhtml_legend=1 00:28:28.809 --rc geninfo_all_blocks=1 00:28:28.809 --rc geninfo_unexecuted_blocks=1 00:28:28.809 00:28:28.809 ' 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.809 --rc genhtml_branch_coverage=1 00:28:28.809 --rc genhtml_function_coverage=1 00:28:28.809 --rc genhtml_legend=1 00:28:28.809 --rc geninfo_all_blocks=1 00:28:28.809 --rc geninfo_unexecuted_blocks=1 00:28:28.809 00:28:28.809 ' 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.809 --rc genhtml_branch_coverage=1 00:28:28.809 --rc genhtml_function_coverage=1 00:28:28.809 --rc genhtml_legend=1 00:28:28.809 --rc geninfo_all_blocks=1 00:28:28.809 --rc geninfo_unexecuted_blocks=1 00:28:28.809 00:28:28.809 ' 00:28:28.809 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.809 --rc genhtml_branch_coverage=1 00:28:28.809 --rc genhtml_function_coverage=1 00:28:28.809 --rc genhtml_legend=1 00:28:28.809 --rc geninfo_all_blocks=1 00:28:28.809 --rc geninfo_unexecuted_blocks=1 00:28:28.809 00:28:28.809 ' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.810 14:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:28.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.810 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.952 14:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.952 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:36.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.953 14:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:36.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:36.953 Found net devices under 0000:31:00.0: cvl_0_0 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.953 14:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:36.953 Found net devices under 0000:31:00.1: cvl_0_1 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.953 14:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.953 14:24:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:28:36.953 00:28:36.953 --- 10.0.0.2 ping statistics --- 00:28:36.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.953 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:36.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:28:36.953 00:28:36.953 --- 10.0.0.1 ping statistics --- 00:28:36.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.953 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=1817828 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 
1817828 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1817828 ']' 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.953 14:24:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:36.953 [2024-10-13 14:24:40.239866] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:28:36.953 [2024-10-13 14:24:40.239934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.953 [2024-10-13 14:24:40.382716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:36.953 [2024-10-13 14:24:40.431604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.953 [2024-10-13 14:24:40.460047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.953 [2024-10-13 14:24:40.460096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.953 [2024-10-13 14:24:40.460104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.953 [2024-10-13 14:24:40.460111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.953 [2024-10-13 14:24:40.460118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
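nvmfappstart, as captured above, launches the target inside the server-side namespace and blocks until its RPC socket answers. A condensed sketch of the same sequence, using the flags and socket path visible in the log; the polling loop is an approximation of waitforlisten, not its exact implementation:

    # Start the target in the namespace that owns cvl_0_0 (flags copied from the log).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done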
00:28:36.954 [2024-10-13 14:24:40.462097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.954 [2024-10-13 14:24:40.462195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.954 [2024-10-13 14:24:40.462502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.954 [2024-10-13 14:24:40.462505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 Malloc0 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 Delay0 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 [2024-10-13 14:24:41.156014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.536 14:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:37.536 [2024-10-13 14:24:41.196358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.536 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:39.450 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:39.450 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:28:39.450 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:39.450 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:39.450 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1818847 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:41.383 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:41.383 [global] 00:28:41.383 thread=1 00:28:41.383 invalidate=1 00:28:41.383 rw=write 00:28:41.383 time_based=1 00:28:41.383 runtime=60 00:28:41.383 ioengine=libaio 00:28:41.383 direct=1 00:28:41.383 bs=4096 00:28:41.383 iodepth=1 00:28:41.383 norandommap=0 00:28:41.383 numjobs=1 00:28:41.383 00:28:41.383 verify_dump=1 00:28:41.383 verify_backlog=512 00:28:41.383 verify_state_save=0 00:28:41.383 do_verify=1 00:28:41.383 verify=crc32c-intel 00:28:41.383 [job0] 00:28:41.383 filename=/dev/nvme0n1 00:28:41.383 Could not set queue depth (nvme0n1) 00:28:41.647 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:41.647 fio-3.35 00:28:41.647 Starting 1 thread 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:44.189 true 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:44.189 true 00:28:44.189 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:44.190 true 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:44.190 true 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.190 14:24:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.485 14:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:47.485 true 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:47.485 true 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:47.485 true 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:47.485 true 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:47.485 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1818847 00:29:43.745 00:29:43.746 job0: (groupid=0, jobs=1): err= 0: pid=1819014: Sun Oct 13 14:25:45 2024 00:29:43.746 read: IOPS=205, BW=823KiB/s (843kB/s)(48.2MiB/60001msec) 00:29:43.746 slat (usec): min=6, max=9600, avg=25.96, stdev=86.39 00:29:43.746 clat (usec): min=264, max=42157k, avg=4373.21, stdev=379352.59 00:29:43.746 lat (usec): min=290, max=42157k, avg=4399.17, stdev=379352.68 00:29:43.746 clat percentiles (usec): 00:29:43.746 | 1.00th=[ 412], 5.00th=[ 515], 10.00th=[ 570], 20.00th=[ 627], 00:29:43.746 | 30.00th=[ 676], 40.00th=[ 742], 50.00th=[ 783], 60.00th=[ 824], 00:29:43.746 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 979], 00:29:43.746 | 99.00th=[ 1106], 99.50th=[ 1827], 99.90th=[42206], 99.95th=[42730], 00:29:43.746 | 99.99th=[43254] 00:29:43.746 write: IOPS=213, BW=853KiB/s (874kB/s)(50.0MiB/60001msec); 0 zone resets 00:29:43.746 slat (usec): min=9, max=30404, avg=33.48, stdev=270.08 00:29:43.746 clat (usec): min=27, max=922, avg=394.06, stdev=116.68 00:29:43.746 lat (usec): min=145, max=31090, avg=427.54, stdev=297.33 00:29:43.746 clat percentiles (usec): 00:29:43.746 | 1.00th=[ 192], 5.00th=[ 217], 10.00th=[ 269], 20.00th=[ 297], 00:29:43.746 | 30.00th=[ 314], 40.00th=[ 334], 50.00th=[ 396], 60.00th=[ 412], 00:29:43.746 | 70.00th=[ 441], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 594], 00:29:43.746 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 799], 99.95th=[ 857], 00:29:43.746 | 
99.99th=[ 922] 00:29:43.746 bw ( KiB/s): min= 1328, max= 4096, per=100.00%, avg=3276.80, stdev=934.50, samples=30 00:29:43.746 iops : min= 332, max= 1024, avg=819.20, stdev=233.63, samples=30 00:29:43.746 lat (usec) : 50=0.01%, 250=4.52%, 500=37.03%, 750=29.59%, 1000=26.70% 00:29:43.746 lat (msec) : 2=1.90%, 50=0.24%, >=2000=0.01% 00:29:43.746 cpu : usr=0.62%, sys=1.23%, ctx=25157, majf=0, minf=35 00:29:43.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.746 issued rwts: total=12350,12800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.746 00:29:43.746 Run status group 0 (all jobs): 00:29:43.746 READ: bw=823KiB/s (843kB/s), 823KiB/s-823KiB/s (843kB/s-843kB/s), io=48.2MiB (50.6MB), run=60001-60001msec 00:29:43.746 WRITE: bw=853KiB/s (874kB/s), 853KiB/s-853KiB/s (874kB/s-874kB/s), io=50.0MiB (52.4MB), run=60001-60001msec 00:29:43.746 00:29:43.746 Disk stats (read/write): 00:29:43.746 nvme0n1: ios=12348/12666, merge=0/0, ticks=11740/4891, in_queue=16631, util=99.94% 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:43.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:43.746 nvmf hotplug test: fio successful as expected 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.746 rmmod nvme_tcp 00:29:43.746 rmmod nvme_fabrics 00:29:43.746 rmmod nvme_keyring 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 1817828 ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 1817828 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1817828 ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1817828 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1817828 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1817828' 00:29:43.746 killing process with pid 1817828 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1817828 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1817828 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.746 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.318 00:29:44.318 real 1m15.565s 00:29:44.318 user 4m36.492s 00:29:44.318 sys 0m8.519s 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:44.318 ************************************ 00:29:44.318 END TEST nvmf_initiator_timeout 00:29:44.318 ************************************ 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.318 14:25:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
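Stripped of the xtrace framing, the initiator_timeout test that just ended is a short RPC sequence: create a malloc bdev, wrap it in a delay bdev, export it over TCP, raise the delay past the initiator's timeout while fio writes, then lower it again. The calls below are condensed from the log, with rpc.py standing in for the suite's rpc_cmd wrapper (the delay values appear to be microseconds, so 31000000 is 31 s):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Push each latency knob to ~31 s so queued I/O outlives the initiator timeout ...
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    # ... then back to 30 us so fio can drain and complete its verify pass.
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30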
00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.557 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:52.558 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:52.558 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:52.558 Found net devices under 0000:31:00.0: cvl_0_0 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:52.558 Found net devices under 0000:31:00.1: cvl_0_1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:52.558 ************************************ 00:29:52.558 START TEST nvmf_perf_adq 00:29:52.558 ************************************ 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:52.558 * Looking for test storage... 
00:29:52.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.558 --rc genhtml_branch_coverage=1 00:29:52.558 --rc genhtml_function_coverage=1 00:29:52.558 --rc genhtml_legend=1 00:29:52.558 --rc geninfo_all_blocks=1 00:29:52.558 --rc geninfo_unexecuted_blocks=1 00:29:52.558 00:29:52.558 ' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.558 --rc genhtml_branch_coverage=1 00:29:52.558 --rc genhtml_function_coverage=1 00:29:52.558 --rc genhtml_legend=1 00:29:52.558 --rc geninfo_all_blocks=1 00:29:52.558 --rc geninfo_unexecuted_blocks=1 00:29:52.558 00:29:52.558 ' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.558 --rc genhtml_branch_coverage=1 00:29:52.558 --rc genhtml_function_coverage=1 00:29:52.558 --rc genhtml_legend=1 00:29:52.558 --rc geninfo_all_blocks=1 00:29:52.558 --rc geninfo_unexecuted_blocks=1 00:29:52.558 00:29:52.558 ' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.558 --rc genhtml_branch_coverage=1 00:29:52.558 --rc genhtml_function_coverage=1 00:29:52.558 --rc genhtml_legend=1 00:29:52.558 --rc geninfo_all_blocks=1 00:29:52.558 --rc geninfo_unexecuted_blocks=1 00:29:52.558 00:29:52.558 ' 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
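The scripts/common.sh calls just above (lt 1.15 2 via cmp_versions and decimal) are a field-wise version compare used to pick lcov options: both version strings are split on dots and dashes into arrays, missing fields default to 0, and the first unequal pair decides. A compact sketch of that logic, assuming purely numeric fields (the real decimal helper also regex-checks each field):

# Succeeds (exit 0) when version $1 sorts before version $2.
version_lt() {
    local IFS=.-                 # split on the same separators as common.sh
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing field counts as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                     # equal versions are not "less than"
}

version_lt 1.15 2 && echo yes    # prints yes, matching the lt 1.15 2 result above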
00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.558 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:52.559 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.559 14:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.143 14:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:59.143 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:59.143 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:59.143 Found net devices under 0000:31:00.0: cvl_0_0 00:29:59.143 14:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:59.143 Found net devices under 0000:31:00.1: cvl_0_1 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:59.143 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:59.144 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:59.144 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:30:01.057 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:03.601 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:08.893 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:08.893 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:08.893 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:08.894 Found net devices under 0000:31:00.0: cvl_0_0 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:08.894 Found net devices under 0000:31:00.1: cvl_0_1 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.894 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:30:08.894 00:30:08.894 --- 10.0.0.2 ping statistics --- 00:30:08.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.894 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
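nvmf_tcp_init above carves the two ice ports into a target/initiator pair: cvl_0_0 moves into a fresh network namespace (cvl_0_0_ns_spdk) and gets the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction proves the path. The same steps as standalone commands (a sketch assembled from this trace; needs root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open NVMe/TCP port 4420, tagged so teardown can strip exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                          # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> initiator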
00:30:08.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:30:08.894 00:30:08.894 --- 10.0.0.1 ping statistics --- 00:30:08.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.894 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1840142 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1840142 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1840142 ']' 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:08.894 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:08.894 [2024-10-13 14:26:12.196685] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:30:08.894 [2024-10-13 14:26:12.196750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.894 [2024-10-13 14:26:12.338779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:08.894 [2024-10-13 14:26:12.387790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.894 [2024-10-13 14:26:12.415918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.894 [2024-10-13 14:26:12.415963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.894 [2024-10-13 14:26:12.415972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.894 [2024-10-13 14:26:12.415979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.894 [2024-10-13 14:26:12.415985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.894 [2024-10-13 14:26:12.417926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.894 [2024-10-13 14:26:12.418105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.894 [2024-10-13 14:26:12.418195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.894 [2024-10-13 14:26:12.418194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.467 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.467 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.728 [2024-10-13 14:26:13.207476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.728 Malloc1 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:09.728 [2024-10-13 14:26:13.285534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1840417 00:30:09.728 
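Everything the target needs for this ADQ pass is the short rpc_cmd sequence above: posix socket options with placement IDs and server-side zero-copy, framework start (the app was launched with --wait-for-rpc), a TCP transport with an 8192-byte IO unit and socket priority 0, and a 64 MiB malloc bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Replayed against a running nvmf_tgt with SPDK's rpc.py, the sequence looks like this (a sketch; rpc_cmd in the trace is a wrapper over the same script and socket):

RPC="./scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default, as in this run
$RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, the spdk_nvme_perf command that follows (64-deep 4 KiB random reads for 10 s on cores 0xF0) connects using -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.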
14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:30:09.728 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:11.650 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:30:11.650 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.650 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:11.651 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.651 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:30:11.651 "tick_rate": 2394400000, 00:30:11.651 "poll_groups": [ 00:30:11.651 { 00:30:11.651 "name": "nvmf_tgt_poll_group_000", 00:30:11.651 "admin_qpairs": 1, 00:30:11.651 "io_qpairs": 1, 00:30:11.651 "current_admin_qpairs": 1, 00:30:11.651 "current_io_qpairs": 1, 00:30:11.651 "pending_bdev_io": 0, 00:30:11.651 "completed_nvme_io": 16020, 00:30:11.651 "transports": [ 00:30:11.651 { 00:30:11.651 "trtype": "TCP" 00:30:11.651 } 00:30:11.651 ] 00:30:11.651 }, 00:30:11.651 { 00:30:11.651 "name": "nvmf_tgt_poll_group_001", 00:30:11.651 "admin_qpairs": 0, 00:30:11.651 "io_qpairs": 1, 00:30:11.651 "current_admin_qpairs": 0, 00:30:11.651 "current_io_qpairs": 1, 00:30:11.651 "pending_bdev_io": 0, 00:30:11.651 "completed_nvme_io": 17590, 00:30:11.651 "transports": [ 00:30:11.651 { 00:30:11.651 "trtype": "TCP" 00:30:11.651 } 00:30:11.651 ] 00:30:11.651 }, 00:30:11.651 { 00:30:11.651 "name": "nvmf_tgt_poll_group_002", 00:30:11.651 "admin_qpairs": 0, 00:30:11.651 "io_qpairs": 1, 00:30:11.651 "current_admin_qpairs": 0, 00:30:11.651 "current_io_qpairs": 1, 00:30:11.651 "pending_bdev_io": 0, 00:30:11.651 "completed_nvme_io": 17082, 00:30:11.651 "transports": [ 00:30:11.651 { 00:30:11.651 "trtype": "TCP" 00:30:11.651 } 00:30:11.651 ] 00:30:11.651 }, 00:30:11.651 { 00:30:11.651 "name": "nvmf_tgt_poll_group_003", 00:30:11.651 "admin_qpairs": 0, 00:30:11.651 "io_qpairs": 1, 00:30:11.651 "current_admin_qpairs": 0, 00:30:11.651 "current_io_qpairs": 1, 00:30:11.651 "pending_bdev_io": 0, 00:30:11.651 "completed_nvme_io": 15443, 00:30:11.651 "transports": [ 00:30:11.651 { 00:30:11.651 "trtype": "TCP" 00:30:11.651 } 00:30:11.651 ] 00:30:11.651 } 00:30:11.651 ] 00:30:11.651 }' 00:30:11.651 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:11.651 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:30:11.911 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:30:11.911 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:30:11.911 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1840417 00:30:20.050 Initializing NVMe Controllers 00:30:20.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:20.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:20.051 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:20.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:20.051 Initialization complete. Launching workers. 00:30:20.051 ======================================================== 00:30:20.051 Latency(us) 00:30:20.051 Device Information : IOPS MiB/s Average min max 00:30:20.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13189.33 51.52 4852.76 1429.86 12776.70 00:30:20.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13551.72 52.94 4722.59 1317.75 12192.74 00:30:20.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13581.82 53.05 4716.45 1253.19 42122.07 00:30:20.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12178.16 47.57 5254.22 1132.17 13167.38 00:30:20.051 ======================================================== 00:30:20.051 Total : 52501.02 205.08 4877.02 1132.17 42122.07 00:30:20.051 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.051 rmmod nvme_tcp 00:30:20.051 rmmod nvme_fabrics 00:30:20.051 rmmod nvme_keyring 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1840142 ']' 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1840142 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1840142 ']' 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1840142 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1840142 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1840142' 00:30:20.051 killing process with pid 1840142 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 
1840142 00:30:20.051 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1840142 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.312 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.227 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:22.227 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:30:22.227 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:30:22.227 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:30:24.140 14:26:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:30:26.052 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.344 14:26:34 
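Teardown inverts the setup: the SPDK_NVMF comment added by ipts is what lets the iptables-save | grep -v | iptables-restore pipeline above strip only the test's firewall rules, removing the namespace hands cvl_0_0 back to the root namespace, and adq_reload_driver bounces ice so the next pass starts with a clean ADQ queue state. Condensed (a sketch; _remove_spdk_ns is defined outside this excerpt and is assumed here to delete the namespace):

# Drop only the rules tagged at setup time.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1
# adq_reload_driver: fresh ice instance before the next perf pass.
modprobe -a sch_mqprio
rmmod ice
modprobe ice
sleep 5                           # let the cvl_* ports reappear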
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.344 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.345 14:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:31.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:31.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:31.345 Found net devices under 0000:31:00.0: cvl_0_0 00:30:31.345 14:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:31.345 Found net devices under 0000:31:00.1: cvl_0_1 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:30:31.345 00:30:31.345 --- 10.0.0.2 ping statistics --- 00:30:31.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.345 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:30:31.345 00:30:31.345 --- 10.0.0.1 ping statistics --- 00:30:31.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.345 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:31.345 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:31.345 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:30:31.345 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:30:31.345 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:30:31.346 14:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:30:31.346 net.core.busy_poll = 1 00:30:31.346 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:30:31.346 net.core.busy_read = 1 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:31.607 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1844947 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1844947 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1844947 ']' 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:31.868 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:31.868 [2024-10-13 14:26:35.378368] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
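For reference, the ADQ plumbing that adq_configure_driver has just applied (target/perf_adq.sh@22-38 in the trace above) condenses to the sketch below. The cvl_0_0 interface, the cvl_0_0_ns_spdk namespace and the 10.0.0.2:4420 listener address are this run's values; on other hardware substitute your own E810 port.

#!/usr/bin/env bash
# ADQ setup condensed from the adq_configure_driver trace above.
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on            # enable HW TC offload
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                            # socket busy polling
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP flows for the 10.0.0.2:4420 listener into TC1, in hardware.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked right after (scripts/perf/nvmf/set_xps_rxqs in the SPDK tree) then aligns XPS on the same interface, and the target itself is launched with --wait-for-rpc so that socket options can still be changed before the framework initializes.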
00:30:31.868 [2024-10-13 14:26:35.378434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.868 [2024-10-13 14:26:35.520533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:31.868 [2024-10-13 14:26:35.568301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.129 [2024-10-13 14:26:35.596613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.129 [2024-10-13 14:26:35.596658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.129 [2024-10-13 14:26:35.596667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.129 [2024-10-13 14:26:35.596675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.129 [2024-10-13 14:26:35.596682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.129 [2024-10-13 14:26:35.599007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.129 [2024-10-13 14:26:35.599128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.129 [2024-10-13 14:26:35.599223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.129 [2024-10-13 14:26:35.599223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.702 14:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.702 [2024-10-13 14:26:36.392923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.702 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.963 Malloc1 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:32.963 [2024-10-13 14:26:36.467198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1845138 00:30:32.963 
14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:30:32.963 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:30:34.897 "tick_rate": 2394400000, 00:30:34.897 "poll_groups": [ 00:30:34.897 { 00:30:34.897 "name": "nvmf_tgt_poll_group_000", 00:30:34.897 "admin_qpairs": 1, 00:30:34.897 "io_qpairs": 2, 00:30:34.897 "current_admin_qpairs": 1, 00:30:34.897 "current_io_qpairs": 2, 00:30:34.897 "pending_bdev_io": 0, 00:30:34.897 "completed_nvme_io": 25583, 00:30:34.897 "transports": [ 00:30:34.897 { 00:30:34.897 "trtype": "TCP" 00:30:34.897 } 00:30:34.897 ] 00:30:34.897 }, 00:30:34.897 { 00:30:34.897 "name": "nvmf_tgt_poll_group_001", 00:30:34.897 "admin_qpairs": 0, 00:30:34.897 "io_qpairs": 2, 00:30:34.897 "current_admin_qpairs": 0, 00:30:34.897 "current_io_qpairs": 2, 00:30:34.897 "pending_bdev_io": 0, 00:30:34.897 "completed_nvme_io": 24711, 00:30:34.897 "transports": [ 00:30:34.897 { 00:30:34.897 "trtype": "TCP" 00:30:34.897 } 00:30:34.897 ] 00:30:34.897 }, 00:30:34.897 { 00:30:34.897 "name": "nvmf_tgt_poll_group_002", 00:30:34.897 "admin_qpairs": 0, 00:30:34.897 "io_qpairs": 0, 00:30:34.897 "current_admin_qpairs": 0, 00:30:34.897 "current_io_qpairs": 0, 00:30:34.897 "pending_bdev_io": 0, 00:30:34.897 "completed_nvme_io": 0, 00:30:34.897 "transports": [ 00:30:34.897 { 00:30:34.897 "trtype": "TCP" 00:30:34.897 } 00:30:34.897 ] 00:30:34.897 }, 00:30:34.897 { 00:30:34.897 "name": "nvmf_tgt_poll_group_003", 00:30:34.897 "admin_qpairs": 0, 00:30:34.897 "io_qpairs": 0, 00:30:34.897 "current_admin_qpairs": 0, 00:30:34.897 "current_io_qpairs": 0, 00:30:34.897 "pending_bdev_io": 0, 00:30:34.897 "completed_nvme_io": 0, 00:30:34.897 "transports": [ 00:30:34.897 { 00:30:34.897 "trtype": "TCP" 00:30:34.897 } 00:30:34.897 ] 00:30:34.897 } 00:30:34.897 ] 00:30:34.897 }' 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:30:34.897 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1845138 00:30:43.035 Initializing NVMe Controllers 00:30:43.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:43.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:43.035 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:43.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:43.035 Initialization complete. Launching workers. 00:30:43.035 ======================================================== 00:30:43.035 Latency(us) 00:30:43.035 Device Information : IOPS MiB/s Average min max 00:30:43.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9826.70 38.39 6514.15 974.71 54262.08 00:30:43.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7441.70 29.07 8599.31 1392.81 52686.82 00:30:43.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8590.80 33.56 7449.45 1337.86 53926.23 00:30:43.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12473.30 48.72 5130.22 1127.34 52905.61 00:30:43.035 ======================================================== 00:30:43.035 Total : 38332.50 149.74 6678.23 974.71 54262.08 00:30:43.035 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.035 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.035 rmmod nvme_tcp 00:30:43.035 rmmod nvme_fabrics 00:30:43.296 rmmod nvme_keyring 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1844947 ']' 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1844947 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1844947 ']' 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1844947 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1844947 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1844947' 00:30:43.296 killing process with pid 1844947 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1844947 
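The nvmf_get_stats / jq check above (target/perf_adq.sh@107-109) is the pass criterion for this busy-poll run: with ADQ steering in place the IO qpairs must stay confined to a subset of poll groups, so at least two of the four groups have to report current_io_qpairs == 0. The target-side RPC sequence, written out as standalone scripts/rpc.py calls (rpc_cmd in the trace is the suite's wrapper for the same RPCs), is roughly:

#!/usr/bin/env bash
# Sketch of the target configuration traced above. Socket options must be
# applied before framework_start_init, hence nvmf_tgt ran with --wait-for-rpc.
rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB malloc bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf is connected (-c 0xF0, i.e. lcores 4-7), count the
# poll groups that carry no IO qpairs; fewer than 2 means steering failed.
idle=$(rpc.py nvmf_get_stats \
       | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
       | wc -l)
(( idle >= 2 )) || echo "ADQ check failed: only $idle idle poll groups" >&2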
00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1844947 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.296 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:30:46.599 00:30:46.599 real 0m54.813s 00:30:46.599 user 2m49.652s 00:30:46.599 sys 0m11.697s 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:46.599 ************************************ 00:30:46.599 END TEST nvmf_perf_adq 00:30:46.599 ************************************ 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:46.599 ************************************ 00:30:46.599 START TEST nvmf_shutdown 00:30:46.599 ************************************ 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:46.599 * Looking for test storage... 
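One teardown detail worth pulling out of the trace above: the suite never flushes the firewall wholesale. Each rule is inserted with an SPDK_NVMF comment tag (the ipts wrapper seen at nvmf/common.sh@287-288 earlier), and cleanup filters exactly those tagged rules back out (the iptr wrapper at nvmf/common.sh@789 just above):

#!/usr/bin/env bash
# Tagged insert / filtered restore, as in the ipts/iptr helpers traced above.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# ... test runs ...
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules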
00:30:46.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:30:46.599 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:46.860 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:46.860 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:46.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.861 --rc genhtml_branch_coverage=1 00:30:46.861 --rc genhtml_function_coverage=1 00:30:46.861 --rc genhtml_legend=1 00:30:46.861 --rc geninfo_all_blocks=1 00:30:46.861 --rc geninfo_unexecuted_blocks=1 00:30:46.861 00:30:46.861 ' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:46.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.861 --rc genhtml_branch_coverage=1 00:30:46.861 --rc genhtml_function_coverage=1 00:30:46.861 --rc genhtml_legend=1 00:30:46.861 --rc geninfo_all_blocks=1 00:30:46.861 --rc geninfo_unexecuted_blocks=1 00:30:46.861 00:30:46.861 ' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:46.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.861 --rc genhtml_branch_coverage=1 00:30:46.861 --rc genhtml_function_coverage=1 00:30:46.861 --rc genhtml_legend=1 00:30:46.861 --rc geninfo_all_blocks=1 00:30:46.861 --rc geninfo_unexecuted_blocks=1 00:30:46.861 00:30:46.861 ' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:46.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.861 --rc genhtml_branch_coverage=1 00:30:46.861 --rc genhtml_function_coverage=1 00:30:46.861 --rc genhtml_legend=1 00:30:46.861 --rc geninfo_all_blocks=1 00:30:46.861 --rc geninfo_unexecuted_blocks=1 00:30:46.861 00:30:46.861 ' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
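The scripts/common.sh walk just traced is a dotted-version gate: the installed lcov version (1.15) is compared field by field against 2, and because 1 < 2 the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage option names are kept (newer lcov releases renamed these options). A minimal sketch of that comparison, using a hypothetical version_lt helper in place of the suite's lt/cmp_versions:

#!/usr/bin/env bash
# Minimal sketch: split both versions on '.', '-' and ':' (the IFS=.-: seen
# in the trace) and compare numerically field by field; missing fields are 0.
version_lt() {                            # version_lt A B -> true if A < B
    local -a ver1 ver2
    local i
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
}
version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"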
00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.861 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:46.873 14:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:46.873 ************************************ 00:30:46.873 START TEST nvmf_shutdown_tc1 00:30:46.873 ************************************ 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.873 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.018 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.018 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:55.019 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:55.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:55.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:55.019 Found net devices under 0000:31:00.0: cvl_0_0 00:30:55.019 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:55.019 Found net devices under 0000:31:00.1: cvl_0_1 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.019 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:30:55.019 00:30:55.019 --- 10.0.0.2 ping statistics --- 00:30:55.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.019 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:30:55.019 00:30:55.019 --- 10.0.0.1 ping statistics --- 00:30:55.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.019 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1851637 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1851637 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1851637 ']' 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
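The nvmf_tcp_init step traced above boils down to a short iproute2/iptables sequence: isolate the target port in its own network namespace, address both ends, open TCP port 4420, and ping in both directions. A minimal standalone sketch reconstructed from the trace (not a verbatim excerpt of nvmf/common.sh; the interface names and addresses mirror the log but are otherwise arbitrary):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from a clean slate
ip netns add "$NS"                                   # target lives in its own namespace
ip link set cvl_0_0 netns "$NS"                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # sanity-check both directions
ip netns exec "$NS" ping -c 1 10.0.0.1

With this topology in place, the target (nvmf_tgt) is then launched inside the namespace via "ip netns exec", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.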
00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.019 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.020 [2024-10-13 14:26:58.251506] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:30:55.020 [2024-10-13 14:26:58.251571] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.020 [2024-10-13 14:26:58.394044] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:55.020 [2024-10-13 14:26:58.443455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:55.020 [2024-10-13 14:26:58.472009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.020 [2024-10-13 14:26:58.472053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.020 [2024-10-13 14:26:58.472072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.020 [2024-10-13 14:26:58.472080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.020 [2024-10-13 14:26:58.472086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.020 [2024-10-13 14:26:58.474381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.020 [2024-10-13 14:26:58.474543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:55.020 [2024-10-13 14:26:58.474703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.020 [2024-10-13 14:26:58.474705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.592 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.593 [2024-10-13 14:26:59.124363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:55.593 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:55.593 Malloc1 00:30:55.593 [2024-10-13 14:26:59.250378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.593 Malloc2 00:30:55.854 Malloc3 00:30:55.854 Malloc4 00:30:55.854 Malloc5 00:30:55.854 Malloc6 00:30:55.854 Malloc7 00:30:55.854 Malloc8 00:30:56.116 Malloc9 00:30:56.116 Malloc10 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1851904 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1851904 /var/tmp/bdevperf.sock 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1851904 ']' 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:56.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
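The create_subsystems step traced above appends one RPC batch per subsystem to rpcs.txt (the per-subsystem "cat" calls) and then replays the whole file through rpc_cmd; the individual RPCs are not echoed in the trace. A hedged sketch of the equivalent sequence, assuming rpc.py's batch-from-stdin replay and using illustrative bdev sizes (64 MiB, 512 B blocks), since the trace does not show them:

for i in $(seq 1 10); do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < rpcs.txt   # one batched replay against the running target

This matches what the log shows downstream: Malloc1 through Malloc10 appear as bdevs, and the target reports a single NVMe/TCP listener on 10.0.0.2:4420 shared by all ten subsystems.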
00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.116 { 00:30:56.116 "params": { 00:30:56.116 "name": "Nvme$subsystem", 00:30:56.116 "trtype": "$TEST_TRANSPORT", 00:30:56.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.116 "adrfam": "ipv4", 00:30:56.116 "trsvcid": "$NVMF_PORT", 00:30:56.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.116 "hdgst": ${hdgst:-false}, 00:30:56.116 "ddgst": ${ddgst:-false} 00:30:56.116 }, 00:30:56.116 "method": "bdev_nvme_attach_controller" 00:30:56.116 } 00:30:56.116 EOF 00:30:56.116 )") 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.116 { 00:30:56.116 "params": { 00:30:56.116 "name": "Nvme$subsystem", 00:30:56.116 "trtype": "$TEST_TRANSPORT", 00:30:56.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.116 "adrfam": "ipv4", 00:30:56.116 "trsvcid": "$NVMF_PORT", 00:30:56.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.116 "hdgst": ${hdgst:-false}, 00:30:56.116 "ddgst": ${ddgst:-false} 00:30:56.116 }, 00:30:56.116 "method": "bdev_nvme_attach_controller" 00:30:56.116 } 00:30:56.116 EOF 00:30:56.116 )") 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.116 { 00:30:56.116 "params": { 00:30:56.116 "name": "Nvme$subsystem", 00:30:56.116 "trtype": "$TEST_TRANSPORT", 00:30:56.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.116 "adrfam": "ipv4", 00:30:56.116 "trsvcid": "$NVMF_PORT", 00:30:56.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.116 "hdgst": ${hdgst:-false}, 00:30:56.116 "ddgst": ${ddgst:-false} 00:30:56.116 }, 00:30:56.116 "method": "bdev_nvme_attach_controller" 
00:30:56.116 } 00:30:56.116 EOF 00:30:56.116 )") 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.116 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.116 { 00:30:56.116 "params": { 00:30:56.116 "name": "Nvme$subsystem", 00:30:56.116 "trtype": "$TEST_TRANSPORT", 00:30:56.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.116 "adrfam": "ipv4", 00:30:56.116 "trsvcid": "$NVMF_PORT", 00:30:56.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.116 "hdgst": ${hdgst:-false}, 00:30:56.116 "ddgst": ${ddgst:-false} 00:30:56.116 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.117 { 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme$subsystem", 00:30:56.117 "trtype": "$TEST_TRANSPORT", 00:30:56.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "$NVMF_PORT", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.117 "hdgst": ${hdgst:-false}, 00:30:56.117 "ddgst": ${ddgst:-false} 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.117 { 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme$subsystem", 00:30:56.117 "trtype": "$TEST_TRANSPORT", 00:30:56.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "$NVMF_PORT", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.117 "hdgst": ${hdgst:-false}, 00:30:56.117 "ddgst": ${ddgst:-false} 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 [2024-10-13 14:26:59.764486] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:30:56.117 [2024-10-13 14:26:59.764560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.117 { 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme$subsystem", 00:30:56.117 "trtype": "$TEST_TRANSPORT", 00:30:56.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "$NVMF_PORT", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.117 "hdgst": ${hdgst:-false}, 00:30:56.117 "ddgst": ${ddgst:-false} 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.117 { 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme$subsystem", 00:30:56.117 "trtype": "$TEST_TRANSPORT", 00:30:56.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "$NVMF_PORT", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.117 "hdgst": ${hdgst:-false}, 00:30:56.117 "ddgst": ${ddgst:-false} 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.117 { 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme$subsystem", 00:30:56.117 "trtype": "$TEST_TRANSPORT", 00:30:56.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "$NVMF_PORT", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.117 "hdgst": ${hdgst:-false}, 00:30:56.117 "ddgst": ${ddgst:-false} 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:56.117 { 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme$subsystem", 00:30:56.117 "trtype": "$TEST_TRANSPORT", 00:30:56.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.117 "adrfam": "ipv4", 
00:30:56.117 "trsvcid": "$NVMF_PORT", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.117 "hdgst": ${hdgst:-false}, 00:30:56.117 "ddgst": ${ddgst:-false} 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 } 00:30:56.117 EOF 00:30:56.117 )") 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:56.117 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme1", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme2", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme3", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme4", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme5", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme6", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme7", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 
"adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme8", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:56.117 "hdgst": false, 00:30:56.117 "ddgst": false 00:30:56.117 }, 00:30:56.117 "method": "bdev_nvme_attach_controller" 00:30:56.117 },{ 00:30:56.117 "params": { 00:30:56.117 "name": "Nvme9", 00:30:56.117 "trtype": "tcp", 00:30:56.117 "traddr": "10.0.0.2", 00:30:56.117 "adrfam": "ipv4", 00:30:56.117 "trsvcid": "4420", 00:30:56.117 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:56.117 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:56.117 "hdgst": false, 00:30:56.118 "ddgst": false 00:30:56.118 }, 00:30:56.118 "method": "bdev_nvme_attach_controller" 00:30:56.118 },{ 00:30:56.118 "params": { 00:30:56.118 "name": "Nvme10", 00:30:56.118 "trtype": "tcp", 00:30:56.118 "traddr": "10.0.0.2", 00:30:56.118 "adrfam": "ipv4", 00:30:56.118 "trsvcid": "4420", 00:30:56.118 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:56.118 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:56.118 "hdgst": false, 00:30:56.118 "ddgst": false 00:30:56.118 }, 00:30:56.118 "method": "bdev_nvme_attach_controller" 00:30:56.118 }' 00:30:56.379 [2024-10-13 14:26:59.902483] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:56.379 [2024-10-13 14:26:59.953554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.379 [2024-10-13 14:26:59.982267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.763 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:57.763 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:57.763 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:57.763 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.764 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:57.764 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.764 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1851904 00:30:57.764 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:57.764 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:58.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1851904 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1851637 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.707 { 00:30:58.707 "params": { 00:30:58.707 "name": "Nvme$subsystem", 00:30:58.707 "trtype": "$TEST_TRANSPORT", 00:30:58.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.707 "adrfam": "ipv4", 00:30:58.707 "trsvcid": "$NVMF_PORT", 00:30:58.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.707 "hdgst": ${hdgst:-false}, 00:30:58.707 "ddgst": ${ddgst:-false} 00:30:58.707 }, 00:30:58.707 "method": "bdev_nvme_attach_controller" 00:30:58.707 } 00:30:58.707 EOF 00:30:58.707 )") 00:30:58.707 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.968 14:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.968 { 00:30:58.968 "params": { 00:30:58.968 "name": "Nvme$subsystem", 00:30:58.968 "trtype": "$TEST_TRANSPORT", 00:30:58.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.968 "adrfam": "ipv4", 00:30:58.968 "trsvcid": "$NVMF_PORT", 00:30:58.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.968 "hdgst": ${hdgst:-false}, 00:30:58.968 "ddgst": ${ddgst:-false} 00:30:58.968 }, 00:30:58.968 "method": "bdev_nvme_attach_controller" 00:30:58.968 } 00:30:58.968 EOF 00:30:58.968 )") 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.968 { 00:30:58.968 "params": { 00:30:58.968 "name": "Nvme$subsystem", 00:30:58.968 "trtype": "$TEST_TRANSPORT", 00:30:58.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.968 "adrfam": "ipv4", 00:30:58.968 "trsvcid": "$NVMF_PORT", 00:30:58.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.968 "hdgst": ${hdgst:-false}, 00:30:58.968 "ddgst": ${ddgst:-false} 00:30:58.968 }, 00:30:58.968 "method": "bdev_nvme_attach_controller" 00:30:58.968 } 00:30:58.968 EOF 00:30:58.968 )") 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.968 { 00:30:58.968 "params": { 00:30:58.968 "name": "Nvme$subsystem", 00:30:58.968 "trtype": "$TEST_TRANSPORT", 00:30:58.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.968 "adrfam": "ipv4", 00:30:58.968 "trsvcid": "$NVMF_PORT", 00:30:58.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.968 "hdgst": ${hdgst:-false}, 00:30:58.968 "ddgst": ${ddgst:-false} 00:30:58.968 }, 00:30:58.968 "method": "bdev_nvme_attach_controller" 00:30:58.968 } 00:30:58.968 EOF 00:30:58.968 )") 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.968 { 00:30:58.968 "params": { 00:30:58.968 "name": "Nvme$subsystem", 00:30:58.968 "trtype": "$TEST_TRANSPORT", 00:30:58.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.968 "adrfam": "ipv4", 00:30:58.968 "trsvcid": "$NVMF_PORT", 00:30:58.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.968 "hdgst": ${hdgst:-false}, 00:30:58.968 "ddgst": ${ddgst:-false} 00:30:58.968 }, 00:30:58.968 "method": "bdev_nvme_attach_controller" 00:30:58.968 } 00:30:58.968 EOF 00:30:58.968 )") 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # cat 00:30:58.968 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.969 { 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme$subsystem", 00:30:58.969 "trtype": "$TEST_TRANSPORT", 00:30:58.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "$NVMF_PORT", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.969 "hdgst": ${hdgst:-false}, 00:30:58.969 "ddgst": ${ddgst:-false} 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 } 00:30:58.969 EOF 00:30:58.969 )") 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.969 { 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme$subsystem", 00:30:58.969 "trtype": "$TEST_TRANSPORT", 00:30:58.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "$NVMF_PORT", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.969 "hdgst": ${hdgst:-false}, 00:30:58.969 "ddgst": ${ddgst:-false} 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 } 00:30:58.969 EOF 00:30:58.969 )") 00:30:58.969 [2024-10-13 14:27:02.457129] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:30:58.969 [2024-10-13 14:27:02.457186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852687 ] 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.969 { 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme$subsystem", 00:30:58.969 "trtype": "$TEST_TRANSPORT", 00:30:58.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "$NVMF_PORT", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.969 "hdgst": ${hdgst:-false}, 00:30:58.969 "ddgst": ${ddgst:-false} 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 } 00:30:58.969 EOF 00:30:58.969 )") 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.969 { 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme$subsystem", 00:30:58.969 "trtype": "$TEST_TRANSPORT", 00:30:58.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "$NVMF_PORT", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.969 "hdgst": ${hdgst:-false}, 00:30:58.969 "ddgst": ${ddgst:-false} 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 } 00:30:58.969 EOF 00:30:58.969 )") 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:58.969 { 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme$subsystem", 00:30:58.969 "trtype": "$TEST_TRANSPORT", 00:30:58.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "$NVMF_PORT", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.969 "hdgst": ${hdgst:-false}, 00:30:58.969 "ddgst": ${ddgst:-false} 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 } 00:30:58.969 EOF 00:30:58.969 )") 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
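With the first bdev_svc helper killed off (line 74 of shutdown.sh, per the trace), the test re-generates the same JSON and points bdevperf at the ten subsystems. Condensed from the trace, and assuming it is run from the spdk repo root with test/nvmf/common.sh sourced so gen_nvmf_target_json is defined:

# -q 64: 64 outstanding I/Os; -o 65536: 64 KiB I/O size;
# -w verify: write, read back, and compare; -t 1: run for 1 second.
./build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 1

The per-device IOPS/bandwidth table that follows in the log is this run's output; the verify workload is what makes it a shutdown-correctness check rather than a pure throughput benchmark.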
00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:58.969 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme1", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme2", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme3", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme4", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme5", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme6", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme7", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme8", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme9", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 },{ 00:30:58.969 "params": { 00:30:58.969 "name": "Nvme10", 00:30:58.969 "trtype": "tcp", 00:30:58.969 "traddr": "10.0.0.2", 00:30:58.969 "adrfam": "ipv4", 00:30:58.969 "trsvcid": "4420", 00:30:58.969 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:58.969 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:58.969 "hdgst": false, 00:30:58.969 "ddgst": false 00:30:58.969 }, 00:30:58.969 "method": "bdev_nvme_attach_controller" 00:30:58.969 }' 00:30:58.969 [2024-10-13 14:27:02.589507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:58.969 [2024-10-13 14:27:02.637079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.969 [2024-10-13 14:27:02.654872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.355 Running I/O for 1 seconds... 00:31:01.561 1871.00 IOPS, 116.94 MiB/s 00:31:01.561 Latency(us) 00:31:01.561 [2024-10-13T12:27:05.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.561 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme1n1 : 1.13 232.52 14.53 0.00 0.00 267023.47 9853.39 255750.24 00:31:01.561 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme2n1 : 1.15 222.15 13.88 0.00 0.00 280510.74 34377.39 236481.39 00:31:01.561 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme3n1 : 1.13 229.86 14.37 0.00 0.00 260186.17 18830.93 255750.24 00:31:01.561 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme4n1 : 1.16 275.87 17.24 0.00 0.00 218165.54 13630.52 248743.39 00:31:01.561 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme5n1 : 1.14 224.27 14.02 0.00 0.00 263589.76 19159.37 243488.25 00:31:01.561 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme6n1 : 1.19 220.16 13.76 0.00 0.00 254224.73 7718.49 252246.82 00:31:01.561 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme7n1 : 1.16 276.99 17.31 0.00 0.00 205845.55 15984.39 245239.96 00:31:01.561 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme8n1 : 1.19 268.52 16.78 0.00 0.00 209407.66 12645.19 246991.67 00:31:01.561 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification 
LBA range: start 0x0 length 0x400 00:31:01.561 Nvme9n1 : 1.18 224.09 14.01 0.00 0.00 241032.64 9962.87 253998.53 00:31:01.561 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:01.561 Verification LBA range: start 0x0 length 0x400 00:31:01.561 Nvme10n1 : 1.20 266.28 16.64 0.00 0.00 203893.09 8375.38 269763.96 00:31:01.561 [2024-10-13T12:27:05.268Z] =================================================================================================================== 00:31:01.561 [2024-10-13T12:27:05.268Z] Total : 2440.70 152.54 0.00 0.00 237701.13 7718.49 269763.96 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.823 rmmod nvme_tcp 00:31:01.823 rmmod nvme_fabrics 00:31:01.823 rmmod nvme_keyring 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1851637 ']' 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1851637 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1851637 ']' 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1851637 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1851637 00:31:01.823 14:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1851637' 00:31:01.823 killing process with pid 1851637 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1851637 00:31:01.823 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1851637 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.085 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.637 00:31:04.637 real 0m17.337s 00:31:04.637 user 0m34.859s 00:31:04.637 sys 0m7.062s 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:04.637 ************************************ 00:31:04.637 END TEST nvmf_shutdown_tc1 00:31:04.637 ************************************ 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:04.637 ************************************ 00:31:04.637 START TEST nvmf_shutdown_tc2 00:31:04.637 
************************************ 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:31:04.637 14:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.637 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:04.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.638 14:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:04.638 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:04.638 Found net devices under 0000:31:00.0: cvl_0_0 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:04.638 14:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:04.638 Found net devices under 0000:31:00.1: cvl_0_1 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.638 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:31:04.638 00:31:04.638 --- 10.0.0.2 ping statistics --- 00:31:04.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.638 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:31:04.638 00:31:04.638 --- 10.0.0.1 ping statistics --- 00:31:04.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.638 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1853809 00:31:04.638 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1853809 00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1853809 ']' 00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:04.639 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.639 [2024-10-13 14:27:08.284815] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:31:04.639 [2024-10-13 14:27:08.284879] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.899 [2024-10-13 14:27:08.426735] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:04.899 [2024-10-13 14:27:08.475859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.899 [2024-10-13 14:27:08.499708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.899 [2024-10-13 14:27:08.499745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.899 [2024-10-13 14:27:08.499751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.899 [2024-10-13 14:27:08.499756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.899 [2024-10-13 14:27:08.499760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.899 [2024-10-13 14:27:08.501401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.899 [2024-10-13 14:27:08.501557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.899 [2024-10-13 14:27:08.501711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.899 [2024-10-13 14:27:08.501712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 [2024-10-13 14:27:09.130260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.471 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:05.732 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:05.732 Malloc1 00:31:05.732 [2024-10-13 14:27:09.237315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.732 Malloc2 00:31:05.732 Malloc3 00:31:05.732 Malloc4 00:31:05.732 Malloc5 00:31:05.732 Malloc6 00:31:05.994 Malloc7 00:31:05.994 Malloc8 00:31:05.994 Malloc9 00:31:05.994 Malloc10 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1854508 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1854508 /var/tmp/bdevperf.sock 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1854508 ']' 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:05.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.994 { 00:31:05.994 "params": { 00:31:05.994 "name": "Nvme$subsystem", 00:31:05.994 "trtype": "$TEST_TRANSPORT", 00:31:05.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.994 "adrfam": "ipv4", 00:31:05.994 "trsvcid": "$NVMF_PORT", 00:31:05.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.994 "hdgst": ${hdgst:-false}, 00:31:05.994 "ddgst": ${ddgst:-false} 00:31:05.994 }, 00:31:05.994 "method": "bdev_nvme_attach_controller" 00:31:05.994 } 00:31:05.994 EOF 00:31:05.994 )") 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.994 { 00:31:05.994 "params": { 00:31:05.994 "name": "Nvme$subsystem", 00:31:05.994 "trtype": "$TEST_TRANSPORT", 00:31:05.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.994 "adrfam": "ipv4", 00:31:05.994 "trsvcid": "$NVMF_PORT", 00:31:05.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.994 "hdgst": ${hdgst:-false}, 00:31:05.994 "ddgst": ${ddgst:-false} 00:31:05.994 }, 00:31:05.994 "method": "bdev_nvme_attach_controller" 00:31:05.994 } 00:31:05.994 EOF 00:31:05.994 )") 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.994 { 00:31:05.994 "params": { 00:31:05.994 "name": "Nvme$subsystem", 00:31:05.994 "trtype": "$TEST_TRANSPORT", 00:31:05.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.994 "adrfam": "ipv4", 00:31:05.994 "trsvcid": "$NVMF_PORT", 00:31:05.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.994 "hdgst": ${hdgst:-false}, 00:31:05.994 "ddgst": ${ddgst:-false} 00:31:05.994 }, 00:31:05.994 "method": 
"bdev_nvme_attach_controller" 00:31:05.994 } 00:31:05.994 EOF 00:31:05.994 )") 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.994 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.995 { 00:31:05.995 "params": { 00:31:05.995 "name": "Nvme$subsystem", 00:31:05.995 "trtype": "$TEST_TRANSPORT", 00:31:05.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.995 "adrfam": "ipv4", 00:31:05.995 "trsvcid": "$NVMF_PORT", 00:31:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.995 "hdgst": ${hdgst:-false}, 00:31:05.995 "ddgst": ${ddgst:-false} 00:31:05.995 }, 00:31:05.995 "method": "bdev_nvme_attach_controller" 00:31:05.995 } 00:31:05.995 EOF 00:31:05.995 )") 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.995 { 00:31:05.995 "params": { 00:31:05.995 "name": "Nvme$subsystem", 00:31:05.995 "trtype": "$TEST_TRANSPORT", 00:31:05.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.995 "adrfam": "ipv4", 00:31:05.995 "trsvcid": "$NVMF_PORT", 00:31:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.995 "hdgst": ${hdgst:-false}, 00:31:05.995 "ddgst": ${ddgst:-false} 00:31:05.995 }, 00:31:05.995 "method": "bdev_nvme_attach_controller" 00:31:05.995 } 00:31:05.995 EOF 00:31:05.995 )") 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.995 { 00:31:05.995 "params": { 00:31:05.995 "name": "Nvme$subsystem", 00:31:05.995 "trtype": "$TEST_TRANSPORT", 00:31:05.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.995 "adrfam": "ipv4", 00:31:05.995 "trsvcid": "$NVMF_PORT", 00:31:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.995 "hdgst": ${hdgst:-false}, 00:31:05.995 "ddgst": ${ddgst:-false} 00:31:05.995 }, 00:31:05.995 "method": "bdev_nvme_attach_controller" 00:31:05.995 } 00:31:05.995 EOF 00:31:05.995 )") 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.995 [2024-10-13 14:27:09.688570] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:31:05.995 [2024-10-13 14:27:09.688626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854508 ] 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.995 { 00:31:05.995 "params": { 00:31:05.995 "name": "Nvme$subsystem", 00:31:05.995 "trtype": "$TEST_TRANSPORT", 00:31:05.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.995 "adrfam": "ipv4", 00:31:05.995 "trsvcid": "$NVMF_PORT", 00:31:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.995 "hdgst": ${hdgst:-false}, 00:31:05.995 "ddgst": ${ddgst:-false} 00:31:05.995 }, 00:31:05.995 "method": "bdev_nvme_attach_controller" 00:31:05.995 } 00:31:05.995 EOF 00:31:05.995 )") 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.995 { 00:31:05.995 "params": { 00:31:05.995 "name": "Nvme$subsystem", 00:31:05.995 "trtype": "$TEST_TRANSPORT", 00:31:05.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.995 "adrfam": "ipv4", 00:31:05.995 "trsvcid": "$NVMF_PORT", 00:31:05.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.995 "hdgst": ${hdgst:-false}, 00:31:05.995 "ddgst": ${ddgst:-false} 00:31:05.995 }, 00:31:05.995 "method": "bdev_nvme_attach_controller" 00:31:05.995 } 00:31:05.995 EOF 00:31:05.995 )") 00:31:05.995 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:06.303 { 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme$subsystem", 00:31:06.303 "trtype": "$TEST_TRANSPORT", 00:31:06.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.303 "adrfam": "ipv4", 00:31:06.303 "trsvcid": "$NVMF_PORT", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.303 "hdgst": ${hdgst:-false}, 00:31:06.303 "ddgst": ${ddgst:-false} 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 } 00:31:06.303 EOF 00:31:06.303 )") 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:06.303 { 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme$subsystem", 00:31:06.303 "trtype": "$TEST_TRANSPORT", 00:31:06.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.303 
"adrfam": "ipv4", 00:31:06.303 "trsvcid": "$NVMF_PORT", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.303 "hdgst": ${hdgst:-false}, 00:31:06.303 "ddgst": ${ddgst:-false} 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 } 00:31:06.303 EOF 00:31:06.303 )") 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:31:06.303 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme1", 00:31:06.303 "trtype": "tcp", 00:31:06.303 "traddr": "10.0.0.2", 00:31:06.303 "adrfam": "ipv4", 00:31:06.303 "trsvcid": "4420", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.303 "hdgst": false, 00:31:06.303 "ddgst": false 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 },{ 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme2", 00:31:06.303 "trtype": "tcp", 00:31:06.303 "traddr": "10.0.0.2", 00:31:06.303 "adrfam": "ipv4", 00:31:06.303 "trsvcid": "4420", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:06.303 "hdgst": false, 00:31:06.303 "ddgst": false 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 },{ 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme3", 00:31:06.303 "trtype": "tcp", 00:31:06.303 "traddr": "10.0.0.2", 00:31:06.303 "adrfam": "ipv4", 00:31:06.303 "trsvcid": "4420", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:06.303 "hdgst": false, 00:31:06.303 "ddgst": false 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 },{ 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme4", 00:31:06.303 "trtype": "tcp", 00:31:06.303 "traddr": "10.0.0.2", 00:31:06.303 "adrfam": "ipv4", 00:31:06.303 "trsvcid": "4420", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:06.303 "hdgst": false, 00:31:06.303 "ddgst": false 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 },{ 00:31:06.303 "params": { 00:31:06.303 "name": "Nvme5", 00:31:06.303 "trtype": "tcp", 00:31:06.303 "traddr": "10.0.0.2", 00:31:06.303 "adrfam": "ipv4", 00:31:06.303 "trsvcid": "4420", 00:31:06.303 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:06.303 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:06.303 "hdgst": false, 00:31:06.303 "ddgst": false 00:31:06.303 }, 00:31:06.303 "method": "bdev_nvme_attach_controller" 00:31:06.303 },{ 00:31:06.303 "params": { 00:31:06.304 "name": "Nvme6", 00:31:06.304 "trtype": "tcp", 00:31:06.304 "traddr": "10.0.0.2", 00:31:06.304 "adrfam": "ipv4", 00:31:06.304 "trsvcid": "4420", 00:31:06.304 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:06.304 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:06.304 "hdgst": false, 00:31:06.304 "ddgst": false 00:31:06.304 }, 00:31:06.304 "method": "bdev_nvme_attach_controller" 00:31:06.304 },{ 00:31:06.304 "params": { 00:31:06.304 "name": "Nvme7", 00:31:06.304 "trtype": "tcp", 00:31:06.304 "traddr": "10.0.0.2", 
00:31:06.304 "adrfam": "ipv4", 00:31:06.304 "trsvcid": "4420", 00:31:06.304 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:06.304 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:06.304 "hdgst": false, 00:31:06.304 "ddgst": false 00:31:06.304 }, 00:31:06.304 "method": "bdev_nvme_attach_controller" 00:31:06.304 },{ 00:31:06.304 "params": { 00:31:06.304 "name": "Nvme8", 00:31:06.304 "trtype": "tcp", 00:31:06.304 "traddr": "10.0.0.2", 00:31:06.304 "adrfam": "ipv4", 00:31:06.304 "trsvcid": "4420", 00:31:06.304 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:06.304 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:06.304 "hdgst": false, 00:31:06.304 "ddgst": false 00:31:06.304 }, 00:31:06.304 "method": "bdev_nvme_attach_controller" 00:31:06.304 },{ 00:31:06.304 "params": { 00:31:06.304 "name": "Nvme9", 00:31:06.304 "trtype": "tcp", 00:31:06.304 "traddr": "10.0.0.2", 00:31:06.304 "adrfam": "ipv4", 00:31:06.304 "trsvcid": "4420", 00:31:06.304 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:06.304 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:06.304 "hdgst": false, 00:31:06.304 "ddgst": false 00:31:06.304 }, 00:31:06.304 "method": "bdev_nvme_attach_controller" 00:31:06.304 },{ 00:31:06.304 "params": { 00:31:06.304 "name": "Nvme10", 00:31:06.304 "trtype": "tcp", 00:31:06.304 "traddr": "10.0.0.2", 00:31:06.304 "adrfam": "ipv4", 00:31:06.304 "trsvcid": "4420", 00:31:06.304 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:06.304 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:06.304 "hdgst": false, 00:31:06.304 "ddgst": false 00:31:06.304 }, 00:31:06.304 "method": "bdev_nvme_attach_controller" 00:31:06.304 }' 00:31:06.304 [2024-10-13 14:27:09.820128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:06.304 [2024-10-13 14:27:09.870696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.304 [2024-10-13 14:27:09.888964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.763 Running I/O for 10 seconds... 
00:31:07.763 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:07.763 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:31:07.763 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:07.763 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.763 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:31:08.024 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.284 14:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:31:08.284 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:31:08.546 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:31:08.546 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:08.546 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:08.546 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:08.546 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.546 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1854508 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1854508 ']' 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1854508 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1854508 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1854508' 00:31:08.807 killing process with pid 1854508 00:31:08.807 14:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1854508 00:31:08.807 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1854508 00:31:08.807 1668.00 IOPS, 104.25 MiB/s [2024-10-13T12:27:12.514Z] Received shutdown signal, test time was about 1.090988 seconds 00:31:08.807 00:31:08.807 Latency(us) 00:31:08.807 [2024-10-13T12:27:12.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.807 Verification LBA range: start 0x0 length 0x400 00:31:08.807 Nvme1n1 : 1.09 234.91 14.68 0.00 0.00 268323.37 16531.80 297791.38 00:31:08.807 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.807 Verification LBA range: start 0x0 length 0x400 00:31:08.807 Nvme2n1 : 1.08 237.77 14.86 0.00 0.00 258147.09 22224.87 269763.96 00:31:08.807 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.807 Verification LBA range: start 0x0 length 0x400 00:31:08.807 Nvme3n1 : 1.08 240.15 15.01 0.00 0.00 248035.54 4160.32 245239.96 00:31:08.807 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.807 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme4n1 : 1.07 239.44 14.97 0.00 0.00 242173.93 10729.25 253998.53 00:31:08.808 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.808 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme5n1 : 1.06 180.48 11.28 0.00 0.00 311472.94 19487.82 253998.53 00:31:08.808 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.808 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme6n1 : 1.06 181.78 11.36 0.00 0.00 299982.16 20801.60 266260.53 00:31:08.808 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.808 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme7n1 : 1.09 235.39 14.71 0.00 0.00 225662.76 18064.55 276770.81 00:31:08.808 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.808 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme8n1 : 1.08 236.81 14.80 0.00 0.00 217091.51 18940.41 252246.82 00:31:08.808 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.808 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme9n1 : 1.07 178.99 11.19 0.00 0.00 277383.51 22005.91 275019.10 00:31:08.808 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:08.808 Verification LBA range: start 0x0 length 0x400 00:31:08.808 Nvme10n1 : 1.06 180.66 11.29 0.00 0.00 264231.40 18611.96 253998.53 00:31:08.808 [2024-10-13T12:27:12.515Z] =================================================================================================================== 00:31:08.808 [2024-10-13T12:27:12.515Z] Total : 2146.40 134.15 0.00 0.00 258230.82 4160.32 297791.38 00:31:09.069 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1853809 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f 
./local-job0-0-verify.state 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.010 rmmod nvme_tcp 00:31:10.010 rmmod nvme_fabrics 00:31:10.010 rmmod nvme_keyring 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1853809 ']' 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1853809 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1853809 ']' 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1853809 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1853809 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1853809' 00:31:10.010 killing process with pid 1853809 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1853809 00:31:10.010 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1853809 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' 
== iso ']' 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.270 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.817 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:12.817 00:31:12.817 real 0m8.152s 00:31:12.817 user 0m24.539s 00:31:12.817 sys 0m1.316s 00:31:12.817 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.817 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:12.817 ************************************ 00:31:12.817 END TEST nvmf_shutdown_tc2 00:31:12.817 ************************************ 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:12.817 ************************************ 00:31:12.817 START TEST nvmf_shutdown_tc3 00:31:12.817 ************************************ 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:12.817 14:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.817 14:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.817 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:12.818 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:12.818 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:12.818 Found net devices under 0000:31:00.0: cvl_0_0 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:12.818 Found net devices under 0000:31:00.1: cvl_0_1 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
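
The device-discovery pass traced above reduces to a short bash sketch: glob each whitelisted PCI address's net/ directory in sysfs, keep the interfaces that are up, and strip the sysfs path down to the interface name. The operstate read is an assumption; the xtrace only shows its already-expanded form "[[ up == up ]]".

# Minimal sketch of the discovery pass; the operstate check is assumed.
pci_devs=("0000:31:00.0" "0000:31:00.1")   # the two e810 ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for net_dev in "${pci_net_devs[@]}"; do
        [[ "$(cat "$net_dev/operstate" 2>/dev/null)" == up ]] || continue
        echo "Found net devices under $pci: ${net_dev##*/}"
        net_devs+=("${net_dev##*/}")
    done
done
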
00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:31:12.818 00:31:12.818 --- 10.0.0.2 ping statistics --- 00:31:12.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.818 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:31:12.818 00:31:12.818 --- 10.0.0.1 ping statistics --- 00:31:12.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.818 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1856007 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1856007 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:12.818 14:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1856007 ']' 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:12.818 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.079 [2024-10-13 14:27:16.522921] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:31:13.079 [2024-10-13 14:27:16.522985] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.079 [2024-10-13 14:27:16.664467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:13.079 [2024-10-13 14:27:16.710089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.079 [2024-10-13 14:27:16.727068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.079 [2024-10-13 14:27:16.727096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.079 [2024-10-13 14:27:16.727101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.079 [2024-10-13 14:27:16.727106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.079 [2024-10-13 14:27:16.727110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
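
Condensing the namespace plumbing traced above (binary paths shortened; the log shows the netns-exec prefix repeated three times by nested wrappers, but one is what counts): the first e810 port is moved into a namespace as the target side, the second stays in the root namespace as the initiator, the listener port is opened with a tagged iptables rule, and connectivity is proven in both directions before nvmf_tgt starts.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The comment tags the rule so the iptr cleanup (iptables-save | grep -v
# SPDK_NVMF | iptables-restore, as seen at the end of tc2) can strip it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
# 0x1E = 0b11110: run the target's reactors on cores 1-4, matching the
# "Reactor started on core 1..4" notices below.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
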
00:31:13.079 [2024-10-13 14:27:16.728423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.079 [2024-10-13 14:27:16.728575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.079 [2024-10-13 14:27:16.728712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.079 [2024-10-13 14:27:16.728714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:13.650 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:13.650 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:31:13.650 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:13.650 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:13.650 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.911 [2024-10-13 14:27:17.382742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.911 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:13.911 Malloc1 00:31:13.911 [2024-10-13 14:27:17.492870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.911 Malloc2 00:31:13.911 Malloc3 00:31:13.911 Malloc4 00:31:14.172 Malloc5 00:31:14.172 Malloc6 00:31:14.172 Malloc7 00:31:14.172 Malloc8 00:31:14.172 Malloc9 00:31:14.172 Malloc10 00:31:14.172 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.172 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:31:14.172 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.172 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1856229 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1856229 /var/tmp/bdevperf.sock 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1856229 ']' 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:14.432 14:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:14.432 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 
"name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 [2024-10-13 14:27:17.938342] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:31:14.433 [2024-10-13 14:27:17.938397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856229 ] 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:14.433 { 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme$subsystem", 00:31:14.433 "trtype": "$TEST_TRANSPORT", 00:31:14.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.433 
"adrfam": "ipv4", 00:31:14.433 "trsvcid": "$NVMF_PORT", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.433 "hdgst": ${hdgst:-false}, 00:31:14.433 "ddgst": ${ddgst:-false} 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 } 00:31:14.433 EOF 00:31:14.433 )") 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:31:14.433 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme1", 00:31:14.433 "trtype": "tcp", 00:31:14.433 "traddr": "10.0.0.2", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "4420", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.433 "hdgst": false, 00:31:14.433 "ddgst": false 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 },{ 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme2", 00:31:14.433 "trtype": "tcp", 00:31:14.433 "traddr": "10.0.0.2", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "4420", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:14.433 "hdgst": false, 00:31:14.433 "ddgst": false 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 },{ 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme3", 00:31:14.433 "trtype": "tcp", 00:31:14.433 "traddr": "10.0.0.2", 00:31:14.433 "adrfam": "ipv4", 00:31:14.433 "trsvcid": "4420", 00:31:14.433 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:14.433 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:14.433 "hdgst": false, 00:31:14.433 "ddgst": false 00:31:14.433 }, 00:31:14.433 "method": "bdev_nvme_attach_controller" 00:31:14.433 },{ 00:31:14.433 "params": { 00:31:14.433 "name": "Nvme4", 00:31:14.433 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 },{ 00:31:14.434 "params": { 00:31:14.434 "name": "Nvme5", 00:31:14.434 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 },{ 00:31:14.434 "params": { 00:31:14.434 "name": "Nvme6", 00:31:14.434 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 },{ 00:31:14.434 "params": { 00:31:14.434 "name": "Nvme7", 00:31:14.434 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 
00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 },{ 00:31:14.434 "params": { 00:31:14.434 "name": "Nvme8", 00:31:14.434 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 },{ 00:31:14.434 "params": { 00:31:14.434 "name": "Nvme9", 00:31:14.434 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 },{ 00:31:14.434 "params": { 00:31:14.434 "name": "Nvme10", 00:31:14.434 "trtype": "tcp", 00:31:14.434 "traddr": "10.0.0.2", 00:31:14.434 "adrfam": "ipv4", 00:31:14.434 "trsvcid": "4420", 00:31:14.434 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:14.434 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:14.434 "hdgst": false, 00:31:14.434 "ddgst": false 00:31:14.434 }, 00:31:14.434 "method": "bdev_nvme_attach_controller" 00:31:14.434 }' 00:31:14.434 [2024-10-13 14:27:18.069736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:14.434 [2024-10-13 14:27:18.116953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.434 [2024-10-13 14:27:18.135253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.345 Running I/O for 10 seconds... 
00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1856007 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1856007 ']' 
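
The waitforio helper just traced polls bdevperf's iostat over the RPC socket until the named bdev has completed at least 100 reads (here it saw 131 on the first attempt, so the test proceeds to kill the target mid-I/O). A reconstruction from the visible fragments; rpc_cmd stands for the suite's rpc.py wrapper, and the retry delay is assumed since the xtrace does not show it:

waitforio() {
    local rpc_addr=$1 bdev=$2
    [ -z "$rpc_addr" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # assumed; the trace does not show the delay
    done
    return $ret
}
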
00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1856007 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856007 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856007' 00:31:16.923 killing process with pid 1856007 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1856007 00:31:16.923 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1856007 00:31:16.923 [2024-10-13 14:27:20.608733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set 00:31:16.923 [2024-10-13 14:27:20.608832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
00:31:16.923 [2024-10-13 14:27:20.608733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6070 is same with the state(6) to be set
[... same tcp.c:1773 error repeated verbatim for tqpair=0x1bd6070 through 14:27:20.609087 ...]
00:31:16.924 [2024-10-13 14:27:20.610021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d49ea0 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1d49ea0 through 14:27:20.610347 ...]
00:31:16.925 [2024-10-13 14:27:20.611442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6540 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1bd6540 through 14:27:20.611459 ...]
00:31:16.925 [2024-10-13 14:27:20.613559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd73d0 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1bd73d0 through 14:27:20.613867 ...]
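The flood of tcp.c:1773 errors above (and continuing below) comes from the TCP transport's qpair receive-state setter being asked, during shutdown, to enter the state the qpair is already in; each rejected transition logs one line. A minimal sketch of that kind of guard, assuming the function simply compares current and requested state; the enum ordering and the function body are illustrative, not SPDK's actual tcp.c:

    #include <stdio.h>

    /* Illustrative subset of PDU receive states; the real enum in SPDK
     * has a fixed order that may differ from this sketch. */
    enum nvme_tcp_pdu_recv_state {
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY,
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_CH,
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH,
        NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PAYLOAD,
        NVME_TCP_PDU_RECV_STATE_QUIESCING,
        NVME_TCP_PDU_RECV_STATE_ERROR,
    };

    struct nvmf_tcp_qpair {
        enum nvme_tcp_pdu_recv_state recv_state;
    };

    static void
    nvmf_tcp_qpair_set_recv_state(struct nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* This is the guard flooding the log: the caller requested a
             * transition into the state the qpair is already in. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
        /* ...per-state bookkeeping would follow here... */
    }

    int main(void)
    {
        struct nvmf_tcp_qpair q = { NVME_TCP_PDU_RECV_STATE_ERROR };
        /* Requesting the current state again triggers the error line. */
        nvmf_tcp_qpair_set_recv_state(&q, NVME_TCP_PDU_RECV_STATE_ERROR);
        return 0;
    }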
00:31:16.925 [2024-10-13 14:27:20.614659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd78a0 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1bd78a0 through 14:27:20.614945 ...]
00:31:16.926 [2024-10-13 14:27:20.615856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd7d70 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1bd7d70 through 14:27:20.616174 ...]
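Runs like these, where one message repeats back-to-back hundreds of times, are commonly tamed with syslog-style duplicate suppression: buffer the last message, count consecutive repeats, and emit a single "last message repeated N times" line when the stream moves on. A generic sketch of that technique (our own illustration; SPDK's logger is not claimed to do this):

    #include <stdio.h>
    #include <string.h>

    /* Emit a line, collapsing consecutive duplicates syslog-style. */
    static void log_dedup(const char *msg)
    {
        static char last[256];
        static unsigned repeats;

        if (strcmp(msg, last) == 0) {
            repeats++;            /* swallow the duplicate, just count it */
            return;
        }
        if (repeats > 0) {
            printf("last message repeated %u times\n", repeats);
        }
        repeats = 0;
        snprintf(last, sizeof(last), "%s", msg);
        printf("%s\n", msg);
    }

    int main(void)
    {
        for (int i = 0; i < 60; i++) {
            log_dedup("The recv state of tqpair=0x1bd6070 is same with the state(6) to be set");
        }
        log_dedup("next message");  /* flushes the repeat count */
        return 0;
    }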
00:31:16.927 [2024-10-13 14:27:20.616799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8260 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1bd8260 through 14:27:20.617130 ...]
00:31:16.928 [2024-10-13 14:27:20.617568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8730 is same with the state(6) to be set
[... same error repeated verbatim for tqpair=0x1bd8730 through 14:27:20.617868 ...]
00:31:16.928 [2024-10-13 14:27:20.624059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.928 [2024-10-13 14:27:20.624101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.928 [2024-10-13 14:27:20.624112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:16.928 [2024-10-13 14:27:20.624120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.928 [2024-10-13 14:27:20.624129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.928 [2024-10-13 14:27:20.624136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.928 [2024-10-13 14:27:20.624145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.928 [2024-10-13 14:27:20.624153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.928 [2024-10-13 14:27:20.624160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6bea0 is same with the state(6) to be set 00:31:16.928 [2024-10-13 14:27:20.624198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.928 [2024-10-13 14:27:20.624207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.928 [2024-10-13 14:27:20.624215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81c610 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 
[2024-10-13 14:27:20.624339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7a750 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81a520 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xc6e960 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x812700 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46890 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 
14:27:20.624746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b200 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81ca70 is same with the state(6) to be set 00:31:16.929 [2024-10-13 14:27:20.624903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:16.929 [2024-10-13 14:27:20.624957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.929 [2024-10-13 14:27:20.624965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x726610 is same with the state(6) to be set 00:31:17.201 [2024-10-13 14:27:20.625328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.625985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.625992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:17.201 [2024-10-13 14:27:20.626002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.626009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.626020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.626028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.201 [2024-10-13 14:27:20.626037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.201 [2024-10-13 14:27:20.626044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 
14:27:20.626174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626347] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.202 [2024-10-13 14:27:20.626437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.202 [2024-10-13 14:27:20.626466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:17.202 [2024-10-13 14:27:20.626512] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd0b0d0 was disconnected and freed. reset controller. 
00:31:17.202 [2024-10-13 14:27:20.626592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.202 [2024-10-13 14:27:20.626603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(WRITE commands cid:1-59, lba:24704-32128, all len:128, are printed and aborted with the same SQ DELETION status, 14:27:20.626614-14:27:20.637878)
00:31:17.204 [2024-10-13
14:27:20.637888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.637895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.637905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.637912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.637924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.637931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.637941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.637948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.637957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa228f0 is same with the state(6) to be set
00:31:17.204 [2024-10-13 14:27:20.638012] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa228f0 was disconnected and freed. reset controller.
00:31:17.204 [2024-10-13 14:27:20.638390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6bea0 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81c610 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7a750 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81a520 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6e960 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x812700 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc46890 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6b200 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81ca70 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.638549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x726610 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.641356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:31:17.204 [2024-10-13 14:27:20.642317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect:
*NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:31:17.204 [2024-10-13 14:27:20.642794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.204 [2024-10-13 14:27:20.642813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81c610 with addr=10.0.0.2, port=4420
00:31:17.204 [2024-10-13 14:27:20.642823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81c610 is same with the state(6) to be set
00:31:17.204 [2024-10-13 14:27:20.643282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13
14:27:20.643445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.643462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.643470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa21400 is same with the state(6) to be set
00:31:17.204 [2024-10-13 14:27:20.643516] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa21400 was disconnected and freed. reset controller.
00:31:17.204 [2024-10-13 14:27:20.643819] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.643859] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.643896] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.643936] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.643973] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.644010] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.644342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.204 [2024-10-13 14:27:20.644383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc46890 with addr=10.0.0.2, port=4420
00:31:17.204 [2024-10-13 14:27:20.644396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46890 is same with the state(6) to be set
00:31:17.204 [2024-10-13 14:27:20.644414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81c610 (9): Bad file descriptor
00:31:17.204 [2024-10-13 14:27:20.644490] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:17.204 [2024-10-13 14:27:20.645511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.645533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.645552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.645562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.645573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.645582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.204 [2024-10-13 14:27:20.645594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.204 [2024-10-13 14:27:20.645602] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.204 [2024-10-13 14:27:20.645792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.204 [2024-10-13 14:27:20.645799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.645986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.645995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.205 [2024-10-13 14:27:20.646419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.205 [2024-10-13 14:27:20.646426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.646662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.646669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.646678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1f170 is same with the state(6) to be set
00:31:17.206 [2024-10-13 14:27:20.646740] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc1f170 was disconnected and freed. reset controller.
00:31:17.206 [2024-10-13 14:27:20.646790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:31:17.206 [2024-10-13 14:27:20.646820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc46890 (9): Bad file descriptor
00:31:17.206 [2024-10-13 14:27:20.646831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:31:17.206 [2024-10-13 14:27:20.646838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:31:17.206 [2024-10-13 14:27:20.646847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:31:17.206 [2024-10-13 14:27:20.648128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:17.206 [2024-10-13 14:27:20.648143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:31:17.206 [2024-10-13 14:27:20.648504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.206 [2024-10-13 14:27:20.648519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x812700 with addr=10.0.0.2, port=4420
00:31:17.206 [2024-10-13 14:27:20.648529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x812700 is same with the state(6) to be set
00:31:17.206 [2024-10-13 14:27:20.648538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:31:17.206 [2024-10-13 14:27:20.648545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:31:17.206 [2024-10-13 14:27:20.648554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:31:17.206 [2024-10-13 14:27:20.648889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
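The bursts above are the expected shape of a forced reconnect in this test: every queued WRITE/READ on the submission queue is completed with NVMe status (00/08), i.e. SCT 0x0 (generic command status) / SC 0x08 (Command Aborted due to SQ Deletion), the qpair is freed, and each reconnect attempt fails with connect() errno = 111 because nothing is listening on the target yet (errno 111 is ECONNREFUSED on Linux). A minimal standalone sketch, assuming a Linux host, that reproduces the same errno as the posix_sock_create lines; this is plain POSIX sockets, not SPDK code, with the address and port copied from the log:

/* probe_econnrefused.c - illustrative only, not SPDK code.
 * Connecting to a TCP port with no listener fails with
 * ECONNREFUSED (111 on Linux), matching
 * "connect() failed, errno = 111" in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against any address/port with no listener, it prints "connect() failed, errno = 111 (Connection refused)", the same condition SPDK's POSIX socket layer reports while the subsystem is being torn down.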
00:31:17.206 [2024-10-13 14:27:20.649346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.206 [2024-10-13 14:27:20.649386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6e960 with addr=10.0.0.2, port=4420
00:31:17.206 [2024-10-13 14:27:20.649397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6e960 is same with the state(6) to be set
00:31:17.206 [2024-10-13 14:27:20.649412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x812700 (9): Bad file descriptor
00:31:17.206 [2024-10-13 14:27:20.649802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6e960 (9): Bad file descriptor
00:31:17.206 [2024-10-13 14:27:20.649817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:31:17.206 [2024-10-13 14:27:20.649824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:31:17.206 [2024-10-13 14:27:20.649832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:31:17.206 [2024-10-13 14:27:20.649876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13 14:27:20.649887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.649902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13 14:27:20.649914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.649925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13 14:27:20.649932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.649942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13 14:27:20.649949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.649958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13 14:27:20.649966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.649975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13 14:27:20.649982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:17.206 [2024-10-13 14:27:20.649993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:17.206 [2024-10-13
14:27:20.650000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.206 [2024-10-13 14:27:20.650181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.206 [2024-10-13 14:27:20.650190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.207 [2024-10-13 14:27:20.650360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.207 [2024-10-13 14:27:20.650368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: each outstanding READ/WRITE on sqid:1 (cid 0-63, lba 16384-32256, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) is printed with a matching "ABORTED - SQ DELETION (00/08)" completion as the TCP qpairs are torn down ...]
00:31:17.207 [2024-10-13 14:27:20.650993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd09e90 is same with the state(6) to be set
00:31:17.209 [2024-10-13 14:27:20.653421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1dbf0 is same with the state(6) to be set
00:31:17.211 [2024-10-13 14:27:20.655810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc206f0 is same with the state(6) to be set
00:31:17.212 [2024-10-13 14:27:20.657858] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.657989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.657998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.658201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.658209] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc21c70 is same with the state(6) to be set 00:31:17.212 [2024-10-13 14:27:20.659473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.212 [2024-10-13 14:27:20.659486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.212 [2024-10-13 14:27:20.659499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.659983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.659990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.213 [2024-10-13 14:27:20.660146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.213 [2024-10-13 14:27:20.660156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:17.214 [2024-10-13 14:27:20.660360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 
14:27:20.660531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.660590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.660598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc230b0 is same with the state(6) to be set 00:31:17.214 [2024-10-13 14:27:20.661862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.661889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.661909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.661930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.661950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.661971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.661988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.661996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.214 [2024-10-13 14:27:20.662145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.214 [2024-10-13 14:27:20.662152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.215 [2024-10-13 14:27:20.662867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.215 [2024-10-13 14:27:20.662874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.216 [2024-10-13 14:27:20.662891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.216 [2024-10-13 14:27:20.662907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.216 [2024-10-13 14:27:20.662924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.216 [2024-10-13 14:27:20.662941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.216 [2024-10-13 14:27:20.662958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.216 [2024-10-13 14:27:20.662976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.216 [2024-10-13 14:27:20.662984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc24630 is same with the state(6) to be set 00:31:17.216 [2024-10-13 14:27:20.664771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
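[editor's note] Every queued READ in the dump above completes with the same status pair, "ABORTED - SQ DELETION (00/08)": the reset tore down the TCP qpair's submission queue, so each outstanding command is returned aborted rather than executed (the "recv state of tqpair=... is same with the state(6) to be set" errors mark the same teardown). Note also that consecutive aborts step lba by 128 with len:128, i.e. the verify job was issuing back-to-back sequential 128-block reads. A minimal standalone sketch of decoding that (sct/sc) pair follows; it is not SPDK API, the reading of "(00/08)" as (status code type / status code) is an assumption from the log format, and only the codes actually visible in this log are mapped.

/* decode_status.c - decode the "(sct/sc)" pair printed after each
 * completion above, e.g. "(00/08)".  Standalone illustration, not SPDK
 * code; codes not seen in this log are left unmapped. */
#include <stdio.h>

struct nvme_status { unsigned sct; unsigned sc; };

static const char *describe(struct nvme_status s)
{
    /* sct 0x0 = Generic Command Status (NVMe base spec) */
    if (s.sct == 0x0 && s.sc == 0x00)
        return "GENERIC / SUCCESSFUL COMPLETION";
    if (s.sct == 0x0 && s.sc == 0x08)
        return "GENERIC / COMMAND ABORTED DUE TO SQ DELETION";
    return "unmapped in this sketch";
}

int main(void)
{
    struct nvme_status s = { 0x00, 0x08 };  /* the (00/08) on every READ above */
    printf("sct=0x%02x sc=0x%02x -> %s\n", s.sct, s.sc, describe(s));
    return 0;
}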
00:31:17.216 [2024-10-13 14:27:20.664794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:17.216 [2024-10-13 14:27:20.664806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:31:17.216 [2024-10-13 14:27:20.664815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:31:17.216 [2024-10-13 14:27:20.664849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:31:17.216 [2024-10-13 14:27:20.664857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:31:17.216 [2024-10-13 14:27:20.664866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:31:17.216 [2024-10-13 14:27:20.664917] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:17.216 [2024-10-13 14:27:20.664934] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:17.216 [2024-10-13 14:27:20.664945] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:17.216 [2024-10-13 14:27:20.664957] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:17.216 [2024-10-13 14:27:20.665036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:31:17.216 [2024-10-13 14:27:20.665047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:31:17.216 task offset: 27904 on job bdev=Nvme2n1 fails
00:31:17.216
00:31:17.216 Latency(us)
00:31:17.216 [2024-10-13T12:27:20.923Z] Device Information : runtime(s)      IOPS    MiB/s   Fail/s    TO/s     Average        min        max
00:31:17.216 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme1n1 ended in about 0.92 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme1n1   :   0.92      207.67    12.98    69.22    0.00   228439.59   15765.43   219840.11
00:31:17.216 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme2n1 ended in about 0.91 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme2n1   :   0.91      210.50    13.16    70.17    0.00   220599.85   13630.52   250495.10
00:31:17.216 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme3n1 ended in about 0.92 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme3n1   :   0.92      209.20    13.08    10.90    0.00   274713.42   14342.16   257501.96
00:31:17.216 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme4n1 ended in about 0.91 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme4n1   :   0.91      210.22    13.14    70.07    0.00   211422.60   16422.32   248743.39
00:31:17.216 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme5n1 ended in about 0.93 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme5n1   :   0.93      143.48     8.97    69.04    0.00   273094.82   17407.66   241736.53
00:31:17.216 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme6n1 ended in about 0.92 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme6n1   :   0.92      208.60    13.04    69.53    0.00   203749.17    6021.52   239984.82
00:31:17.216 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme7n1 ended in about 0.93 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme7n1   :   0.93      137.73     8.61    68.87    0.00   268413.96   18502.48   252246.82
00:31:17.216 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme8n1 ended in about 0.93 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme8n1   :   0.93      206.07    12.88    68.69    0.00   197068.89   14780.09   250495.10
00:31:17.216 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme9n1 ended in about 0.93 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme9n1   :   0.93      137.03     8.56    68.51    0.00   257321.20   19268.85   248743.39
00:31:17.216 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:17.216 Job: Nvme10n1 ended in about 0.94 seconds with error
00:31:17.216 Verification LBA range: start 0x0 length 0x400
00:31:17.216 Nvme10n1  :   0.94      136.68     8.54    68.34    0.00   251760.13   17517.14   259253.67
00:31:17.216 [2024-10-13T12:27:20.923Z] ===================================================================================================================
00:31:17.216 [2024-10-13T12:27:20.923Z] Total     :          1807.17   112.95   633.34    0.00   235147.94    6021.52   259253.67
00:31:17.216 [2024-10-13 14:27:20.692757] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:17.216 [2024-10-13 14:27:20.692809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:31:17.216 [2024-10-13 14:27:20.692828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
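The Latency(us) table above is the bdevperf end-of-run summary: verify workload, queue depth 64, 65536-byte IOs across ten NVMe-oF bdevs. A minimal stand-alone sketch of an equivalent run follows; the binary path and bdevperf.json name are illustrative, not the exact files shutdown.sh generates:

  # bdevperf.json would carry the bdev_nvme_attach_controller entries
  # for Nvme1 .. Nvme10 pointing at 10.0.0.2:4420.
  ./build/examples/bdevperf -q 64 -o 65536 -w verify -t 10 --json ./bdevperf.json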
00:31:17.216 [2024-10-13 14:27:20.693266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:17.216 [2024-10-13 14:27:20.693286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x81ca70 with addr=10.0.0.2, port=4420
00:31:17.216 [2024-10-13 14:27:20.693296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81ca70 is same with the state(6) to be set
[... the same connect() failed / sock connection error / recv state triple follows for tqpair=0x81a520 and tqpair=0x726610, all with addr=10.0.0.2, port=4420 ...]
00:31:17.216 [2024-10-13 14:27:20.695540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:31:17.216 [2024-10-13 14:27:20.695558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
[... the connect/sock/recv-state triple follows for tqpair=0xc7a750, 0xc6bea0 and 0xc6b200 ...]
00:31:17.216 [2024-10-13 14:27:20.696423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81ca70 (9): Bad file descriptor
[... the same flush failure follows for tqpair=0x81a520 and 0x726610 ...]
00:31:17.216 [2024-10-13 14:27:20.696474] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (logged four times)
00:31:17.216 [2024-10-13 14:27:20.696800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
[... the connect/sock/recv-state triple follows for tqpair=0x81c610 and 0xc46890, and the flush failure for tqpair=0xc7a750, 0xc6bea0 and 0xc6b200 ...]
00:31:17.217 [2024-10-13 14:27:20.697574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:31:17.217 [2024-10-13 14:27:20.697580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:31:17.217 [2024-10-13 14:27:20.697588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[... the same error-state / reinitialization-failed / failed-state triple follows for cnode5 and cnode7 ...]
00:31:17.217 [2024-10-13 14:27:20.697711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:31:17.217 [2024-10-13 14:27:20.697722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (logged three times)
[... the connect/sock/recv-state triple follows for tqpair=0x812700, the flush failure for tqpair=0x81c610 and 0xc46890, and the failure triple for cnode8, cnode9 and cnode10, followed by three more 'Resetting controller failed.' errors ...]
[... the connect/sock/recv-state triple follows for tqpair=0xc6e960, the flush failure for tqpair=0x812700 and 0xc6e960, and the failure triple for cnode2, cnode4, cnode3 and cnode6, each reset ending in 'Resetting controller failed.' ...]
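At this point every controller has been marked failed and the resets have given up. When reproducing interactively, two RPCs are enough to confirm the initiator-side state; a sketch, assuming the default RPC socket (pass whichever -s socket the app was actually started with):

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers   # per-controller connect state
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs              # which NvmeXn1 bdevs survived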
00:31:17.217 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:31:18.161 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1856229
00:31:18.161 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1856229
[... xtrace of the NOT / valid_exec_arg exit-status bookkeeping from autotest_common.sh: es=255, then es=127, then es=1, so the expected failure is accepted ...]
00:31:18.162 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:31:18.162 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:31:18.162 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:18.423 rmmod nvme_tcp
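The nvmftestfini teardown traced here reduces to a handful of host-side steps. A rough plain-shell equivalent, where the netns delete is an assumption about what the _remove_spdk_ns helper amounts to and the pgrep lookup is illustrative:

  sync
  modprobe -v -r nvme-tcp                                # also unloads nvme_fabrics/nvme_keyring
  nvmfpid=$(pgrep -f nvmf_tgt) && kill "$nvmfpid"        # a no-op here: the target already exited
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side test IP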
00:31:18.423 rmmod nvme_fabrics
00:31:18.423 rmmod nvme_keyring
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1856007 ']'
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1856007
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1856007 ']'
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1856007
00:31:18.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1856007) - No such process
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1856007 is not found'
00:31:18.423 Process with pid 1856007 is not found
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:18.423 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
[... xtrace of xtrace_disable_per_cmd / eval '_remove_spdk_ns 15> /dev/null' bookkeeping ...]
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:20.970
00:31:20.970 real	0m7.988s
00:31:20.970 user	0m19.462s
00:31:20.970 sys	0m1.295s
00:31:20.970 ************************************
00:31:20.970 END TEST nvmf_shutdown_tc3
00:31:20.970 ************************************
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:31:20.970 ************************************
00:31:20.970 START TEST nvmf_shutdown_tc4
00:31:20.970 ************************************
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
[... xtrace of nvmftestinit / prepare_net_devs / gather_supported_nvmf_pci_devs from nvmf/common.sh: the pci_devs/e810/x722/mlx array bookkeeping is elided; the scan keeps the two supported e810 ports ...]
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:31:20.970 Found 0000:31:00.0 (0x8086 - 0x159b)
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:31:20.970 Found 0000:31:00.1 (0x8086 - 0x159b)
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:31:20.970 Found net devices under 0000:31:00.0: cvl_0_0
00:31:20.970 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:31:20.971 Found net devices under 0000:31:00.1: cvl_0_1
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:20.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:20.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms
00:31:20.971
00:31:20.971 --- 10.0.0.2 ping statistics ---
00:31:20.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:20.971 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:20.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:20.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:31:20.971
00:31:20.971 --- 10.0.0.1 ping statistics ---
00:31:20.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:20.971 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:20.971 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1857635
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1857635
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1857635 ']'
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:20.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
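Consolidated from the trace above, the network bring-up that produced those two clean pings is a short sequence: one e810 port (cvl_0_0) is moved into a private namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator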
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:20.972 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:20.972 [2024-10-13 14:27:24.569842] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:31:20.972 [2024-10-13 14:27:24.569899] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:21.233 [2024-10-13 14:27:24.708990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:31:21.233 [2024-10-13 14:27:24.755457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:21.233 [2024-10-13 14:27:24.772433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:21.233 [2024-10-13 14:27:24.772463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:21.233 [2024-10-13 14:27:24.772469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:21.233 [2024-10-13 14:27:24.772473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:21.233 [2024-10-13 14:27:24.772477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:21.233 [2024-10-13 14:27:24.774046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:21.233 [2024-10-13 14:27:24.774199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:21.233 [2024-10-13 14:27:24.774438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:21.233 [2024-10-13 14:27:24.774439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:21.803 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:21.804 [2024-10-13 14:27:25.421057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
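Each of the ten rpcs.txt blocks written by the for/cat loop traced just below corresponds, roughly, to the following RPC calls. The nqn and bdev names follow the cnode1..cnode10 / Malloc1..Malloc10 pattern visible in the output; the malloc size and block size here are illustrative stand-ins, not the script's actual values:

  for i in $(seq 1 10); do
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                # size/block size assumed
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done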
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[... the for/cat pair above repeats once per subsystem, ten times in all, appending each subsystem's RPC block to rpcs.txt ...]
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:21.804 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:22.064 Malloc1
00:31:22.064 [2024-10-13 14:27:25.532397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:22.064 Malloc2
00:31:22.064 Malloc3
00:31:22.064 Malloc4
00:31:22.064 Malloc5
00:31:22.064 Malloc6
00:31:22.064 Malloc7
00:31:22.325 Malloc8
00:31:22.325 Malloc9
00:31:22.325 Malloc10
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1858014
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:31:22.325 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:31:22.586 [2024-10-13 14:27:26.105489] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
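What tc4 exercises from here on is a kill-under-load: the perf initiator just launched runs 128-deep random writes for 20 seconds while, five seconds in, the target is killed out from under it. Expressed as plain shell (the perf arguments are the traced ones; the pgrep lookup is illustrative):

  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5
  nvmfpid=$(pgrep -f nvmf_tgt)     # the target started earlier by nvmfappstart
  kill "$nvmfpid"                  # take the target down while I/O is in flight
  ! wait "$perfpid"                # the perf run is expected to exit non-zero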
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1857635
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1857635 ']'
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1857635
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:27.879 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857635
00:31:27.879 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:31:27.879 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:31:27.879 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857635'
00:31:27.879 killing process with pid 1857635
00:31:27.879 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1857635
00:31:27.879 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1857635
00:31:27.879 [... roughly sixty near-identical entries collapsed, 2024-10-13 14:27:31.012024 through 14:27:31.015155, all of the form: tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x... is same with the state(6) to be set -- emitted as the target shuts down, for tqpair 0x1602420, 0x16028f0, 0x1602dc0, 0x1601f50, 0x186e9e0, 0x186eeb0, 0x186f380 and 0x186e510 ...]
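The common/autotest_common.sh@950-@974 trace above is the harness's killprocess helper at work. A condensed sketch of the same pattern (not the verbatim source; reconstructed from the traced commands) for readers who want the logic in one place:

    # Sketch of the killprocess pattern: verify the pid is set and alive,
    # refuse to kill the sudo wrapper itself, then kill and reap the process.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                   # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for an SPDK app
            [ "$name" = sudo ] && return 1           # never kill sudo by mistake
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap; nonzero exit is expected here
    }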
[... hundreds of interleaved entries collapsed, 00:31:27.879-00:31:27.881: "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat as the in-flight queue depth drains, mixed with the same tcp.c:1773 recv-state error for tqpair 0x186fbd0, 0x18700a0, 0x1870570 and 0x186f700 ...]
00:31:27.880 [2024-10-13 14:27:31.016925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.880 [2024-10-13 14:27:31.017770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:27.881 [2024-10-13 14:27:31.018685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... write-error storm continues, 00:31:27.881, identical entries collapsed ...]
00:31:27.881 [2024-10-13 14:27:31.020332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:27.881 NVMe io qpair process completion error
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries collapsed, 00:31:27.881-00:31:27.883, as the same failure pattern repeats on the next set of qpairs ...]
00:31:27.882 [2024-10-13 14:27:31.021357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:27.882 [2024-10-13 14:27:31.022163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.882 [2024-10-13 14:27:31.023105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:27.883 [2024-10-13 14:27:31.024933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:27.883 NVMe io qpair process completion error
[... identical entries collapsed again, 00:31:27.883-00:31:27.884 ...]
00:31:27.883 [2024-10-13 14:27:31.025972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:27.883 [2024-10-13 14:27:31.026847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.884 [2024-10-13 14:27:31.027785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:31:27.884 [2024-10-13 14:27:31.030620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:27.884 NVMe io qpair process completion error
[... identical entries collapsed again, 00:31:27.884-00:31:27.885 ...]
00:31:27.885 [2024-10-13 14:27:31.031667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:31:27.885 [2024-10-13 14:27:31.032510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:31:27.885 Write completed with error
(sct=0, sc=8) 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 [2024-10-13 14:27:31.034084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 
00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.885 starting I/O failed: -6 00:31:27.885 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 
00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 [2024-10-13 14:27:31.035511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:27.886 NVMe io qpair process completion error 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 [2024-10-13 14:27:31.036606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.886 starting I/O failed: -6 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, 
sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 [2024-10-13 14:27:31.037545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting 
I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.886 starting I/O failed: -6 00:31:27.886 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 [2024-10-13 14:27:31.038476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 
00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 
00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 [2024-10-13 14:27:31.041345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:27.887 NVMe io qpair process completion error 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 
Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 [2024-10-13 14:27:31.042559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.887 starting I/O failed: -6 00:31:27.887 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O 
failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 [2024-10-13 14:27:31.043460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, 
sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 [2024-10-13 14:27:31.044384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O 
failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.888 starting I/O failed: -6 00:31:27.888 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 [2024-10-13 14:27:31.046629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:27.889 NVMe io qpair process completion error 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 
00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 [2024-10-13 14:27:31.047769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 
00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 [2024-10-13 14:27:31.048583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 00:31:27.889 starting I/O failed: -6 00:31:27.889 Write completed with error (sct=0, sc=8) 
00:31:27.889 Write completed with error (sct=0, sc=8) 
00:31:27.889 starting I/O failed: -6 
00:31:27.889 [the preceding two messages repeat, in varying interleavings, for every write outstanding on the failing qpairs; several hundred identical repeats elided] 
00:31:27.889 [2024-10-13 14:27:31.049529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 
00:31:27.890 [2024-10-13 14:27:31.051178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 
00:31:27.890 NVMe io qpair process completion error 
00:31:27.890 [2024-10-13 14:27:31.052242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 
00:31:27.890 [2024-10-13 14:27:31.053046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 
00:31:27.891 [2024-10-13 14:27:31.053970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 
00:31:27.891 [2024-10-13 14:27:31.056015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 
00:31:27.891 NVMe io qpair process completion error 
00:31:27.892 [2024-10-13 14:27:31.057299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 
00:31:27.892 [2024-10-13 14:27:31.058280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 
00:31:27.892 [2024-10-13 14:27:31.059216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 
00:31:27.893 [2024-10-13 14:27:31.060872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 
00:31:27.893 NVMe io qpair process completion error 
00:31:27.893 [2024-10-13 14:27:31.061900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 
00:31:27.893 [2024-10-13 14:27:31.062736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 
00:31:27.894 [2024-10-13 14:27:31.063695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 
00:31:27.894 [2024-10-13 14:27:31.067732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 
00:31:27.894 NVMe io qpair process completion error 
00:31:27.894 Initializing NVMe Controllers 
00:31:27.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 
00:31:27.894 Controller IO queue size 128, less than required. 
00:31:27.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:27.894 [the same attach message and two-line queue-size advisory repeat for cnode8, cnode9, cnode5, cnode6, cnode3, cnode10, cnode2, cnode4 and cnode1; repeats elided] 
00:31:27.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 
00:31:27.895 [matching associations of cnode8, cnode9, cnode5, cnode6, cnode3, cnode10, cnode2, cnode4 and cnode1 NSID 1 with lcore 0 elided] 
00:31:27.895 Initialization complete. Launching workers. 
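[Editor's note on the queue-size advisory above: it appears once per attached controller because spdk_nvme_perf requested a deeper queue than the 128-entry IO queue each target subsystem reports, so the surplus requests are simply held back in the host driver until completions free a slot. A minimal sketch of how a comparable run could be launched with the depth capped at the controller limit; the flag values and the subsystem NQN are illustrative, not taken from this job's scripts:]

    # Hypothetical invocation; -q (queue depth), -o (IO size in bytes), -w (workload),
    # -t (runtime in seconds) and -r (transport ID) are standard spdk_nvme_perf options.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'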
00:31:27.895 ======================================================== 
00:31:27.895 Latency(us) 
00:31:27.895 Device Information                                                       :    IOPS   MiB/s   Average      min        max 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0 : 1881.30   80.84  68055.93   698.09  119103.26 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0 : 1874.81   80.56  68313.60   626.79  128393.91 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0 : 1881.30   80.84  68122.62   680.89  120602.36 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0 : 1892.11   81.30  67755.06   701.35  126564.82 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0 : 1841.94   79.15  69643.96   902.72  120354.00 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0 : 1868.32   80.28  68691.89   833.31  126374.39 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0 : 1859.67  79.91  69031.74   807.70  127426.95 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0 : 1902.71   81.76  67502.08   966.59  133655.88 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0 : 1894.49   81.40  67818.12   659.78  130359.14 
00:31:27.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0 : 1873.51   80.50  67864.77   890.41  128595.99 
00:31:27.895 ======================================================== 
00:31:27.895 Total                                                                    : 18770.18 806.53  68274.61   626.79  133655.88 
00:31:27.895 [2024-10-13 14:27:31.072293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be6f40 is same with the state(6) to be set 
00:31:27.895 [the same recv-state error repeats for tqpair=0x1be7270, 0x1c02bb0, 0x1bec650, 0x1bec830, 0x1be5af0, 0x1bea070, 0x1be5910, 0x1bec470 and 0x1be9e90; repeats elided] 
00:31:27.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 
00:31:27.895 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 
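[Editor's note: the "errors occurred" exit from spdk_nvme_perf is the expected outcome of nvmf_shutdown_tc4, which tears the target down while writes are still in flight; that is what produced the bursts of "CQ transport error -6" above, and the latency table simply summarizes what each worker completed before the cut. A rough sketch of that kill-under-load pattern, assuming a target already serving the cnode subsystems; $SPDK_BIN and $tgt_pid are placeholders, not values from this log:]

    # Drive writes in the background, then yank the target mid-run and
    # confirm the perf tool exits non-zero.
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -o 4096 -w write -t 30 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    perf_pid=$!
    sleep 5                # let the workers ramp up
    kill -9 "$tgt_pid"     # abrupt target shutdown under load
    if wait "$perf_pid"; then
        echo "unexpected: perf survived the shutdown"
    else
        echo "perf failed as expected"
    fi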
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1858014 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1858014 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1858014 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 
00:31:28.838 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 
00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
rmmod nvme_tcp 
rmmod nvme_fabrics 
rmmod nvme_keyring 
00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 
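[Editor's note: the "NOT wait 1858014" trace above is the harness asserting failure. autotest_common.sh runs the wrapped command, records es=1 when wait returns non-zero for the already-dead perf process, screens out signal exits with the (( es > 128 )) check, and the test passes precisely because the command failed. A condensed sketch of that idiom, not the library's verbatim implementation:]

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # signal death: propagate rather than invert
        (( es != 0 ))                   # zero exit here means the command did fail
    }

    NOT wait 1858014 && echo 'perf exited non-zero, as the test requires'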
nvmf/common.sh@129 -- # return 0 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1857635 ']' 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1857635 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1857635 ']' 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1857635 00:31:28.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1857635) - No such process 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1857635 is not found' 00:31:28.839 Process with pid 1857635 is not found 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.839 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.751 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:30.751 00:31:30.751 real 0m10.296s 00:31:30.751 user 0m27.699s 00:31:30.751 sys 0m3.857s 00:31:30.751 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:30.751 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:31:30.751 ************************************ 00:31:30.751 END TEST nvmf_shutdown_tc4 00:31:30.751 ************************************ 00:31:31.013 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:31:31.013 00:31:31.013 real 0m44.364s 00:31:31.013 user 1m46.828s 00:31:31.013 sys 0m13.888s 00:31:31.013 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:31.013 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:31:31.013 ************************************ 00:31:31.013 END TEST nvmf_shutdown 00:31:31.013 ************************************ 00:31:31.013 14:27:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:31.013 00:31:31.013 real 19m45.142s 00:31:31.013 user 51m41.132s 00:31:31.013 sys 4m51.570s 00:31:31.013 14:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:31.013 14:27:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:31.013 ************************************ 00:31:31.013 END TEST nvmf_target_extra 00:31:31.013 ************************************ 00:31:31.013 14:27:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:31.013 14:27:34 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:31.013 14:27:34 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:31.013 14:27:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:31.013 ************************************ 00:31:31.013 START TEST nvmf_host 00:31:31.013 ************************************ 00:31:31.013 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:31:31.013 * Looking for test storage... 00:31:31.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.275 --rc genhtml_branch_coverage=1 00:31:31.275 --rc genhtml_function_coverage=1 00:31:31.275 --rc genhtml_legend=1 00:31:31.275 --rc geninfo_all_blocks=1 00:31:31.275 --rc geninfo_unexecuted_blocks=1 00:31:31.275 00:31:31.275 ' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.275 --rc genhtml_branch_coverage=1 00:31:31.275 --rc genhtml_function_coverage=1 00:31:31.275 --rc genhtml_legend=1 00:31:31.275 --rc geninfo_all_blocks=1 00:31:31.275 --rc geninfo_unexecuted_blocks=1 00:31:31.275 00:31:31.275 ' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.275 --rc genhtml_branch_coverage=1 00:31:31.275 --rc genhtml_function_coverage=1 00:31:31.275 --rc genhtml_legend=1 00:31:31.275 --rc geninfo_all_blocks=1 00:31:31.275 --rc geninfo_unexecuted_blocks=1 00:31:31.275 00:31:31.275 ' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.275 --rc genhtml_branch_coverage=1 00:31:31.275 --rc genhtml_function_coverage=1 00:31:31.275 --rc genhtml_legend=1 00:31:31.275 --rc geninfo_all_blocks=1 00:31:31.275 --rc geninfo_unexecuted_blocks=1 00:31:31.275 00:31:31.275 ' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
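The lt/cmp_versions trace above gates the lcov coverage flags on the installed lcov version. Reconstructed from that xtrace output, the helper amounts to the following sketch (the standalone packaging and the :-0 padding for uneven version lengths are assumptions; the harness also strips non-numeric suffixes via its decimal helper, omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }            # e.g. lt 1.15 2
    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"              # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"              # "2"    -> (2)
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *=* ]]                            # all components equal: only ==/<=/>= pass
    }

Because lt 1.15 2 succeeds on this runner, the branch- and function-coverage LCOV_OPTS exported in the trace above are enabled.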
00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.275 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.276 ************************************ 00:31:31.276 START TEST nvmf_multicontroller 00:31:31.276 ************************************ 00:31:31.276 14:27:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:31.538 * Looking for test storage... 
00:31:31.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:31.538 14:27:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:31.538 14:27:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:31:31.538 14:27:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.538 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:31.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.539 --rc genhtml_branch_coverage=1 00:31:31.539 --rc genhtml_function_coverage=1 00:31:31.539 --rc genhtml_legend=1 00:31:31.539 --rc geninfo_all_blocks=1 00:31:31.539 --rc geninfo_unexecuted_blocks=1 00:31:31.539 00:31:31.539 ' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:31.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.539 --rc genhtml_branch_coverage=1 00:31:31.539 --rc genhtml_function_coverage=1 00:31:31.539 --rc genhtml_legend=1 00:31:31.539 --rc geninfo_all_blocks=1 00:31:31.539 --rc geninfo_unexecuted_blocks=1 00:31:31.539 00:31:31.539 ' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:31.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.539 --rc genhtml_branch_coverage=1 00:31:31.539 --rc genhtml_function_coverage=1 00:31:31.539 --rc genhtml_legend=1 00:31:31.539 --rc geninfo_all_blocks=1 00:31:31.539 --rc geninfo_unexecuted_blocks=1 00:31:31.539 00:31:31.539 ' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:31.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.539 --rc genhtml_branch_coverage=1 00:31:31.539 --rc genhtml_function_coverage=1 00:31:31.539 --rc genhtml_legend=1 00:31:31.539 --rc geninfo_all_blocks=1 00:31:31.539 --rc geninfo_unexecuted_blocks=1 00:31:31.539 00:31:31.539 ' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:31.539 14:27:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:31.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:31.539 14:27:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:31.539 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:31:31.540 14:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.685 
14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.685 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:39.686 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:39.686 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.686 14:27:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:39.686 Found net devices under 0000:31:00.0: cvl_0_0 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:39.686 Found net devices under 0000:31:00.1: cvl_0_1 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
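Before wiring up the test network, the trace above walks sysfs to map whitelisted PCI IDs to kernel net devices. A minimal sketch of that walk, with the two e810 addresses from this run filled in as illustrative seed values:

    net_devs=()
    pci_devs=(0000:31:00.0 0000:31:00.1)    # e810 ports matched via pci_bus_cache (0x8086:0x159b)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The first two entries of net_devs become the target (cvl_0_0) and initiator (cvl_0_1) interfaces used by nvmf_tcp_init below.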
00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:31:39.686 00:31:39.686 --- 10.0.0.2 ping statistics --- 00:31:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.686 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:31:39.686 00:31:39.686 --- 10.0.0.1 ping statistics --- 00:31:39.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.686 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1863498 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1863498 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1863498 ']' 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.686 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:39.687 14:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:39.687 [2024-10-13 14:27:42.887712] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
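The two successful pings above close out nvmf_tcp_init: the target port lives in a private network namespace at 10.0.0.2 while the initiator port stays in the root namespace at 10.0.0.1. Collected from the commands traced above, the plumbing is roughly (a sketch; NS abbreviates the traced NVMF_TARGET_NAMESPACE, and the address-flush preamble and error handling are elided):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                         # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port, tagged with an SPDK_NVMF comment so iptr can clean it up later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target namespace -> initiator

Every nvmf_tgt launched after this point runs under ip netns exec, which is why the NVMF_APP array is prefixed with the namespace command in the trace above.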
00:31:39.687 [2024-10-13 14:27:42.887773] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.687 [2024-10-13 14:27:43.032419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:39.687 [2024-10-13 14:27:43.080431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:39.687 [2024-10-13 14:27:43.115512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.687 [2024-10-13 14:27:43.115570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.687 [2024-10-13 14:27:43.115583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.687 [2024-10-13 14:27:43.115594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.687 [2024-10-13 14:27:43.115603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.687 [2024-10-13 14:27:43.118039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.687 [2024-10-13 14:27:43.118201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.687 [2024-10-13 14:27:43.118208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.260 [2024-10-13 14:27:43.771944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.260 Malloc0 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
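The rpc_cmd traces around this point provision the target side for the multicontroller test. Pulled together as one sequence, and only as a sketch: rpc_cmd resolves to the same JSON-RPC methods, shown here issued via scripts/rpc.py against the default /var/tmp/spdk.sock (-o and -u 8192 are copied verbatim from the traced transport opts; Malloc1/cnode2 follow below in the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the test's options
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM disk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 gets Malloc1 and the same two listeners, so two subsystems are
    # reachable on two ports for the multicontroller checks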
00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.260 [2024-10-13 14:27:43.853636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.260 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 [2024-10-13 14:27:43.865525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 Malloc1 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 
14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1863842 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1863842 /var/tmp/bdevperf.sock 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1863842 ']' 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
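Everything after this point talks to bdevperf's private RPC socket rather than the target's. A minimal host-side sketch of what the following traces do, with rpc.py standing in for the test's rpc_cmd wrapper:

    # start bdevperf idle (-z) with its own RPC socket; flags copied from the traced command line
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

    # the first attach succeeds and surfaces the remote namespace as bdev NVMe0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # re-using the controller name NVMe0 with a different hostnqn or subsystem must be
    # rejected with JSON-RPC error -114; the NOT wrappers below assert exactly that
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1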
00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.261 14:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.205 NVMe0n1 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.205 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.466 1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.466 request: 00:31:41.466 { 00:31:41.466 "name": "NVMe0", 00:31:41.466 "trtype": "tcp", 00:31:41.466 "traddr": "10.0.0.2", 00:31:41.466 "adrfam": "ipv4", 00:31:41.466 "trsvcid": "4420", 00:31:41.466 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:41.466 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:41.466 "hostaddr": "10.0.0.1", 00:31:41.466 "prchk_reftag": false, 00:31:41.466 "prchk_guard": false, 00:31:41.466 "hdgst": false, 00:31:41.466 "ddgst": false, 00:31:41.466 "allow_unrecognized_csi": false, 00:31:41.466 "method": "bdev_nvme_attach_controller", 00:31:41.466 "req_id": 1 00:31:41.466 } 00:31:41.466 Got JSON-RPC error response 00:31:41.466 response: 00:31:41.466 { 00:31:41.466 "code": -114, 00:31:41.466 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:41.466 } 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.466 request: 00:31:41.466 { 00:31:41.466 "name": "NVMe0", 00:31:41.466 "trtype": "tcp", 00:31:41.466 "traddr": "10.0.0.2", 00:31:41.466 "adrfam": "ipv4", 00:31:41.466 "trsvcid": "4420", 00:31:41.466 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:41.466 "hostaddr": "10.0.0.1", 00:31:41.466 "prchk_reftag": false, 00:31:41.466 "prchk_guard": false, 00:31:41.466 "hdgst": false, 00:31:41.466 "ddgst": false, 00:31:41.466 "allow_unrecognized_csi": false, 00:31:41.466 "method": "bdev_nvme_attach_controller", 00:31:41.466 "req_id": 1 00:31:41.466 } 00:31:41.466 Got JSON-RPC error response 00:31:41.466 response: 00:31:41.466 { 00:31:41.466 "code": -114, 00:31:41.466 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:41.466 } 00:31:41.466 14:27:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.466 request: 00:31:41.466 { 00:31:41.466 "name": "NVMe0", 00:31:41.466 "trtype": "tcp", 00:31:41.466 "traddr": "10.0.0.2", 00:31:41.466 "adrfam": "ipv4", 00:31:41.466 "trsvcid": "4420", 00:31:41.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.466 "hostaddr": "10.0.0.1", 00:31:41.466 "prchk_reftag": false, 00:31:41.466 "prchk_guard": false, 00:31:41.466 "hdgst": false, 00:31:41.466 "ddgst": false, 00:31:41.466 "multipath": "disable", 00:31:41.466 "allow_unrecognized_csi": false, 00:31:41.466 "method": "bdev_nvme_attach_controller", 00:31:41.466 "req_id": 1 00:31:41.466 } 00:31:41.466 Got JSON-RPC error response 00:31:41.466 response: 00:31:41.466 { 00:31:41.466 "code": -114, 00:31:41.466 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:41.466 } 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:41.466 14:27:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.466 14:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.466 request: 00:31:41.466 { 00:31:41.466 "name": "NVMe0", 00:31:41.466 "trtype": "tcp", 00:31:41.466 "traddr": "10.0.0.2", 00:31:41.466 "adrfam": "ipv4", 00:31:41.466 "trsvcid": "4420", 00:31:41.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.466 "hostaddr": "10.0.0.1", 00:31:41.466 "prchk_reftag": false, 00:31:41.466 "prchk_guard": false, 00:31:41.466 "hdgst": false, 00:31:41.466 "ddgst": false, 00:31:41.466 "multipath": "failover", 00:31:41.466 "allow_unrecognized_csi": false, 00:31:41.466 "method": "bdev_nvme_attach_controller", 00:31:41.466 "req_id": 1 00:31:41.466 } 00:31:41.466 Got JSON-RPC error response 00:31:41.466 response: 00:31:41.466 { 00:31:41.466 "code": -114, 00:31:41.466 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:41.466 } 00:31:41.466 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:41.466 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:31:41.466 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:41.466 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:41.466 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:41.466 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.467 NVMe0n1 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
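The four NOT-wrapped attach calls above are deliberate failures: once NVMe0 exists on 10.0.0.2:4420, re-attaching under the same bdev name with a different hostnqn, with a different subsystem NQN, or with -x disable or -x failover against the same path must be rejected with JSON-RPC error -114 instead of silently creating a second controller. A hedged sketch of the pattern the harness encodes (the grep on the error text is illustrative, not part of the logged run):

  # Expected-failure check: the attach must fail, and the failure should be
  # the -114 duplicate-controller error shown in the responses above.
  if out=$($SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 2>&1); then
      echo "unexpected success: duplicate controller name accepted" >&2
      exit 1
  fi
  echo "$out" | grep -q 'already exists'

The attach on port 4421 that follows succeeds because it is the one legal variation: same controller name, same subsystem and host identity, but a genuinely new network path.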
00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.467 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.728 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:41.728 14:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:42.669 { 00:31:42.669 "results": [ 00:31:42.669 { 00:31:42.669 "job": "NVMe0n1", 00:31:42.669 "core_mask": "0x1", 00:31:42.669 "workload": "write", 00:31:42.669 "status": "finished", 00:31:42.669 "queue_depth": 128, 00:31:42.669 "io_size": 4096, 00:31:42.669 "runtime": 1.005958, 00:31:42.669 "iops": 28622.46733959072, 00:31:42.669 "mibps": 111.80651304527625, 00:31:42.669 "io_failed": 0, 00:31:42.669 "io_timeout": 0, 00:31:42.669 "avg_latency_us": 4461.3224455042755, 00:31:42.669 "min_latency_us": 2107.530905446041, 00:31:42.669 "max_latency_us": 13466.301369863013 00:31:42.669 } 00:31:42.669 ], 00:31:42.669 "core_count": 1 00:31:42.669 } 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1863842 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1863842 ']' 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1863842 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:42.669 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1863842 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1863842' 00:31:42.930 killing process with pid 1863842 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1863842 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1863842 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:42.930 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:42.930 [2024-10-13 14:27:43.995208] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:31:42.930 [2024-10-13 14:27:43.995289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1863842 ] 00:31:42.930 [2024-10-13 14:27:44.130487] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:42.930 [2024-10-13 14:27:44.181970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.930 [2024-10-13 14:27:44.210387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.930 [2024-10-13 14:27:45.197721] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 5955f37c-634f-4ab3-be86-b9de2fc84cc7 already exists 00:31:42.930 [2024-10-13 14:27:45.197752] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:5955f37c-634f-4ab3-be86-b9de2fc84cc7 alias for bdev NVMe1n1 00:31:42.930 [2024-10-13 14:27:45.197762] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:42.930 Running I/O for 1 seconds... 00:31:42.930 28601.00 IOPS, 111.72 MiB/s 00:31:42.930 Latency(us) 00:31:42.930 [2024-10-13T12:27:46.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.930 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:42.930 NVMe0n1 : 1.01 28622.47 111.81 0.00 0.00 4461.32 2107.53 13466.30 00:31:42.930 [2024-10-13T12:27:46.637Z] =================================================================================================================== 00:31:42.930 [2024-10-13T12:27:46.637Z] Total : 28622.47 111.81 0.00 0.00 4461.32 2107.53 13466.30 00:31:42.930 Received shutdown signal, test time was about 1.000000 seconds 00:31:42.930 00:31:42.930 Latency(us) 00:31:42.930 [2024-10-13T12:27:46.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.930 [2024-10-13T12:27:46.637Z] =================================================================================================================== 00:31:42.930 [2024-10-13T12:27:46.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.930 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.930 rmmod nvme_tcp 00:31:42.930 rmmod nvme_fabrics 00:31:42.930 rmmod nvme_keyring 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.930 
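Module unload is the fragile part of teardown, which is why the trace runs it under set +e: nvme-tcp can stay pinned briefly after the last disconnect. A sketch of the retry idiom the for i in {1..20} loop above implies (the break-on-success and the backoff are assumptions; only the loop bound and the modprobe calls come from the trace):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # the rmmod lines above are -v output
      sleep 1                            # assumed backoff, not shown in the log
  done
  modprobe -v -r nvme-fabrics
  set -e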
14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1863498 ']' 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1863498 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1863498 ']' 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1863498 00:31:42.930 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1863498 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1863498' 00:31:43.191 killing process with pid 1863498 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1863498 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1863498 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.191 14:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.741 00:31:45.741 real 0m14.010s 00:31:45.741 user 0m16.046s 00:31:45.741 sys 0m6.730s 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:45.741 ************************************ 00:31:45.741 END TEST nvmf_multicontroller 00:31:45.741 
************************************ 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.741 ************************************ 00:31:45.741 START TEST nvmf_aer 00:31:45.741 ************************************ 00:31:45.741 14:27:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:45.741 * Looking for test storage... 00:31:45.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.741 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:45.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.742 --rc genhtml_branch_coverage=1 00:31:45.742 --rc genhtml_function_coverage=1 00:31:45.742 --rc genhtml_legend=1 00:31:45.742 --rc geninfo_all_blocks=1 00:31:45.742 --rc geninfo_unexecuted_blocks=1 00:31:45.742 00:31:45.742 ' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:45.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.742 --rc genhtml_branch_coverage=1 00:31:45.742 --rc genhtml_function_coverage=1 00:31:45.742 --rc genhtml_legend=1 00:31:45.742 --rc geninfo_all_blocks=1 00:31:45.742 --rc geninfo_unexecuted_blocks=1 00:31:45.742 00:31:45.742 ' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:45.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.742 --rc genhtml_branch_coverage=1 00:31:45.742 --rc genhtml_function_coverage=1 00:31:45.742 --rc genhtml_legend=1 00:31:45.742 --rc geninfo_all_blocks=1 00:31:45.742 --rc geninfo_unexecuted_blocks=1 00:31:45.742 00:31:45.742 ' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:45.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.742 --rc genhtml_branch_coverage=1 00:31:45.742 --rc genhtml_function_coverage=1 00:31:45.742 --rc genhtml_legend=1 00:31:45.742 --rc geninfo_all_blocks=1 00:31:45.742 --rc geninfo_unexecuted_blocks=1 00:31:45.742 00:31:45.742 ' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:45.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.742 14:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:53.899 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:53.899 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:53.899 Found net devices under 0000:31:00.0: cvl_0_0 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:53.899 14:27:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:53.899 Found net devices under 0000:31:00.1: cvl_0_1 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.899 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.899 
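The nvmftestinit phase above is easier to read restated compactly: of the two E810 ports discovered, cvl_0_0 moves into a private network namespace to act as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables ACCEPT rule is inserted for the 4420 listener. The same topology as plain iproute2/iptables commands, taken from the trace:

  # Two-port loopback topology; interface names come from the log above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The pings that follow in the trace confirm both directions before any NVMe traffic is attempted.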
14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:31:53.899 00:31:53.899 --- 10.0.0.2 ping statistics --- 00:31:53.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.900 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:31:53.900 00:31:53.900 --- 10.0.0.1 ping statistics --- 00:31:53.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.900 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1868589 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1868589 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1868589 ']' 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:53.900 14:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:53.900 [2024-10-13 14:27:56.924446] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:31:53.900 [2024-10-13 14:27:56.924508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.900 [2024-10-13 14:27:57.066145] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:53.900 [2024-10-13 14:27:57.115915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.900 [2024-10-13 14:27:57.144471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.900 [2024-10-13 14:27:57.144514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.900 [2024-10-13 14:27:57.144522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.900 [2024-10-13 14:27:57.144530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.900 [2024-10-13 14:27:57.144536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.900 [2024-10-13 14:27:57.146638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.900 [2024-10-13 14:27:57.146791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.900 [2024-10-13 14:27:57.146947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.900 [2024-10-13 14:27:57.146948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.160 [2024-10-13 14:27:57.805871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.160 Malloc0 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 
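With networking verified, the target application is started inside the namespace and the AER test's storage stack is built over RPC, as the trace completes just below. Condensed into a sketch (rpc.py standing in for the harness's rpc_cmd is an assumption; every flag is as logged):

  # -m 0xF gives the four reactors seen above; -i 0 sets the shm id used by
  # process_shm, and -e 0xFFFF enables all tracepoint groups.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 2                    # -m 2: at most two namespaces
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420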
00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.160 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.438 [2024-10-13 14:27:57.884540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.438 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.438 [ 00:31:54.438 { 00:31:54.438 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:54.439 "subtype": "Discovery", 00:31:54.439 "listen_addresses": [], 00:31:54.439 "allow_any_host": true, 00:31:54.439 "hosts": [] 00:31:54.439 }, 00:31:54.439 { 00:31:54.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.439 "subtype": "NVMe", 00:31:54.439 "listen_addresses": [ 00:31:54.439 { 00:31:54.439 "trtype": "TCP", 00:31:54.439 "adrfam": "IPv4", 00:31:54.439 "traddr": "10.0.0.2", 00:31:54.439 "trsvcid": "4420" 00:31:54.439 } 00:31:54.439 ], 00:31:54.439 "allow_any_host": true, 00:31:54.439 "hosts": [], 00:31:54.439 "serial_number": "SPDK00000000000001", 00:31:54.439 "model_number": "SPDK bdev Controller", 00:31:54.439 "max_namespaces": 2, 00:31:54.439 "min_cntlid": 1, 00:31:54.439 "max_cntlid": 65519, 00:31:54.439 "namespaces": [ 00:31:54.439 { 00:31:54.439 "nsid": 1, 00:31:54.439 "bdev_name": "Malloc0", 00:31:54.439 "name": "Malloc0", 00:31:54.439 "nguid": "9005DF9D2E3942358E9EA61DDA8722BF", 00:31:54.439 "uuid": "9005df9d-2e39-4235-8e9e-a61dda8722bf" 00:31:54.439 } 00:31:54.439 ] 00:31:54.439 } 00:31:54.439 ] 00:31:54.439 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.439 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:54.439 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:54.439 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1868674 00:31:54.439 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:54.440 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:54.440 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:31:54.440 14:27:57 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:54.440 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:31:54.440 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:31:54.440 14:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:31:54.440 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 Malloc1 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 Asynchronous Event Request test 00:31:54.729 Attaching to 10.0.0.2 00:31:54.729 Attached to 10.0.0.2 00:31:54.729 Registering asynchronous event callbacks... 00:31:54.729 Starting namespace attribute notice tests for all controllers... 00:31:54.729 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:54.729 aer_cb - Changed Namespace 00:31:54.729 Cleaning up... 
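The aer tool output above shows the notice callback firing (aen_event_type 0x02, log page 04h, the Changed Namespace List) once Malloc1 is attached as a second namespace; the nvmf_get_subsystems dump that follows confirms both namespaces on cnode1. For reference, the trigger can be reproduced against a running target with SPDK's rpc.py client — a minimal, hedged sketch mirroring the traced commands (rpc.py availability and the target state are assumptions):

  # Attach a second namespace to raise a Notice AEN (type 0x02); connected
  # hosts then read the Changed Namespace List log page (04h), as traced above.
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  rpc.py nvmf_get_subsystems    # now reports nsid 1 (Malloc0) and nsid 2 (Malloc1)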
00:31:54.729 [ 00:31:54.729 { 00:31:54.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:54.729 "subtype": "Discovery", 00:31:54.729 "listen_addresses": [], 00:31:54.729 "allow_any_host": true, 00:31:54.729 "hosts": [] 00:31:54.729 }, 00:31:54.729 { 00:31:54.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.729 "subtype": "NVMe", 00:31:54.729 "listen_addresses": [ 00:31:54.729 { 00:31:54.729 "trtype": "TCP", 00:31:54.729 "adrfam": "IPv4", 00:31:54.729 "traddr": "10.0.0.2", 00:31:54.729 "trsvcid": "4420" 00:31:54.729 } 00:31:54.729 ], 00:31:54.729 "allow_any_host": true, 00:31:54.729 "hosts": [], 00:31:54.729 "serial_number": "SPDK00000000000001", 00:31:54.729 "model_number": "SPDK bdev Controller", 00:31:54.729 "max_namespaces": 2, 00:31:54.729 "min_cntlid": 1, 00:31:54.729 "max_cntlid": 65519, 00:31:54.729 "namespaces": [ 00:31:54.729 { 00:31:54.729 "nsid": 1, 00:31:54.729 "bdev_name": "Malloc0", 00:31:54.729 "name": "Malloc0", 00:31:54.729 "nguid": "9005DF9D2E3942358E9EA61DDA8722BF", 00:31:54.729 "uuid": "9005df9d-2e39-4235-8e9e-a61dda8722bf" 00:31:54.729 }, 00:31:54.729 { 00:31:54.729 "nsid": 2, 00:31:54.729 "bdev_name": "Malloc1", 00:31:54.729 "name": "Malloc1", 00:31:54.729 "nguid": "8352E0A515684CF4BCBEB78C8490E5D0", 00:31:54.729 "uuid": "8352e0a5-1568-4cf4-bcbe-b78c8490e5d0" 00:31:54.729 } 00:31:54.729 ] 00:31:54.729 } 00:31:54.729 ] 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1868674 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.729 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:54.730 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.730 rmmod 
nvme_tcp 00:31:54.730 rmmod nvme_fabrics 00:31:54.730 rmmod nvme_keyring 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1868589 ']' 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1868589 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1868589 ']' 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1868589 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:31:54.990 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1868589 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1868589' 00:31:54.991 killing process with pid 1868589 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1868589 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1868589 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.991 14:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.540 00:31:57.540 real 0m11.780s 00:31:57.540 user 0m8.424s 00:31:57.540 sys 0m6.237s 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:57.540 ************************************ 00:31:57.540 END TEST nvmf_aer 00:31:57.540 ************************************ 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.540 ************************************ 00:31:57.540 START TEST nvmf_async_init 00:31:57.540 ************************************ 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:57.540 * Looking for test storage... 00:31:57.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:31:57.540 14:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:57.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.540 --rc genhtml_branch_coverage=1 00:31:57.540 --rc genhtml_function_coverage=1 00:31:57.540 --rc genhtml_legend=1 00:31:57.540 --rc geninfo_all_blocks=1 00:31:57.540 --rc geninfo_unexecuted_blocks=1 00:31:57.540 00:31:57.540 ' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:57.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.540 --rc genhtml_branch_coverage=1 00:31:57.540 --rc genhtml_function_coverage=1 00:31:57.540 --rc genhtml_legend=1 00:31:57.540 --rc geninfo_all_blocks=1 00:31:57.540 --rc geninfo_unexecuted_blocks=1 00:31:57.540 00:31:57.540 ' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:57.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.540 --rc genhtml_branch_coverage=1 00:31:57.540 --rc genhtml_function_coverage=1 00:31:57.540 --rc genhtml_legend=1 00:31:57.540 --rc geninfo_all_blocks=1 00:31:57.540 --rc geninfo_unexecuted_blocks=1 00:31:57.540 00:31:57.540 ' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:57.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.540 --rc genhtml_branch_coverage=1 00:31:57.540 --rc genhtml_function_coverage=1 00:31:57.540 --rc genhtml_legend=1 00:31:57.540 --rc geninfo_all_blocks=1 00:31:57.540 --rc geninfo_unexecuted_blocks=1 00:31:57.540 00:31:57.540 ' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.540 14:28:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.540 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:57.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:57.541 14:28:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=297618e6824f47e39b39be9a61f9677d 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.541 14:28:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:05.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:05.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:05.785 Found net devices under 0000:31:00.0: cvl_0_0 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:05.785 Found net devices under 0000:31:00.1: cvl_0_1 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.785 14:28:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:05.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:32:05.785 00:32:05.785 --- 10.0.0.2 ping statistics --- 00:32:05.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.785 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:05.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:32:05.785 00:32:05.785 --- 10.0.0.1 ping statistics --- 00:32:05.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.785 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:05.785 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1873041 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1873041 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1873041 ']' 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.786 14:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:05.786 [2024-10-13 14:28:08.898061] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:32:05.786 [2024-10-13 14:28:08.898134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.786 [2024-10-13 14:28:09.039646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:05.786 [2024-10-13 14:28:09.089294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.786 [2024-10-13 14:28:09.116005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.786 [2024-10-13 14:28:09.116050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.786 [2024-10-13 14:28:09.116058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.786 [2024-10-13 14:28:09.116081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.786 [2024-10-13 14:28:09.116088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.786 [2024-10-13 14:28:09.116859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.047 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.308 [2024-10-13 14:28:09.756855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.308 null0 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.308 14:28:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 297618e6824f47e39b39be9a61f9677d 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.308 [2024-10-13 14:28:09.817045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.308 14:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.569 nvme0n1 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.569 [ 00:32:06.569 { 00:32:06.569 "name": "nvme0n1", 00:32:06.569 "aliases": [ 00:32:06.569 "297618e6-824f-47e3-9b39-be9a61f9677d" 00:32:06.569 ], 00:32:06.569 "product_name": "NVMe disk", 00:32:06.569 "block_size": 512, 00:32:06.569 "num_blocks": 2097152, 00:32:06.569 "uuid": "297618e6-824f-47e3-9b39-be9a61f9677d", 00:32:06.569 "numa_id": 0, 00:32:06.569 "assigned_rate_limits": { 00:32:06.569 "rw_ios_per_sec": 0, 00:32:06.569 "rw_mbytes_per_sec": 0, 00:32:06.569 "r_mbytes_per_sec": 0, 00:32:06.569 "w_mbytes_per_sec": 0 00:32:06.569 }, 00:32:06.569 "claimed": false, 00:32:06.569 "zoned": false, 00:32:06.569 "supported_io_types": { 00:32:06.569 "read": true, 00:32:06.569 "write": true, 00:32:06.569 "unmap": false, 00:32:06.569 "flush": true, 00:32:06.569 "reset": true, 00:32:06.569 "nvme_admin": true, 00:32:06.569 "nvme_io": true, 00:32:06.569 "nvme_io_md": false, 00:32:06.569 "write_zeroes": true, 00:32:06.569 "zcopy": false, 00:32:06.569 "get_zone_info": false, 00:32:06.569 "zone_management": false, 00:32:06.569 "zone_append": false, 00:32:06.569 "compare": true, 00:32:06.569 "compare_and_write": true, 00:32:06.569 "abort": true, 00:32:06.569 "seek_hole": false, 00:32:06.569 "seek_data": false, 00:32:06.569 "copy": true, 00:32:06.569 "nvme_iov_md": false 00:32:06.569 }, 00:32:06.569 "memory_domains": [ 00:32:06.569 { 00:32:06.569 "dma_device_id": "system", 00:32:06.569 "dma_device_type": 1 00:32:06.569 } 00:32:06.569 ], 00:32:06.569 "driver_specific": { 00:32:06.569 "nvme": [ 00:32:06.569 { 00:32:06.569 "trid": { 00:32:06.569 
"trtype": "TCP", 00:32:06.569 "adrfam": "IPv4", 00:32:06.569 "traddr": "10.0.0.2", 00:32:06.569 "trsvcid": "4420", 00:32:06.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:06.569 }, 00:32:06.569 "ctrlr_data": { 00:32:06.569 "cntlid": 1, 00:32:06.569 "vendor_id": "0x8086", 00:32:06.569 "model_number": "SPDK bdev Controller", 00:32:06.569 "serial_number": "00000000000000000000", 00:32:06.569 "firmware_revision": "25.01", 00:32:06.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.569 "oacs": { 00:32:06.569 "security": 0, 00:32:06.569 "format": 0, 00:32:06.569 "firmware": 0, 00:32:06.569 "ns_manage": 0 00:32:06.569 }, 00:32:06.569 "multi_ctrlr": true, 00:32:06.569 "ana_reporting": false 00:32:06.569 }, 00:32:06.569 "vs": { 00:32:06.569 "nvme_version": "1.3" 00:32:06.569 }, 00:32:06.569 "ns_data": { 00:32:06.569 "id": 1, 00:32:06.569 "can_share": true 00:32:06.569 } 00:32:06.569 } 00:32:06.569 ], 00:32:06.569 "mp_policy": "active_passive" 00:32:06.569 } 00:32:06.569 } 00:32:06.569 ] 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.569 [2024-10-13 14:28:10.092958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:06.569 [2024-10-13 14:28:10.093043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b87e0 (9): Bad file descriptor 00:32:06.569 [2024-10-13 14:28:10.225186] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.569 [ 00:32:06.569 { 00:32:06.569 "name": "nvme0n1", 00:32:06.569 "aliases": [ 00:32:06.569 "297618e6-824f-47e3-9b39-be9a61f9677d" 00:32:06.569 ], 00:32:06.569 "product_name": "NVMe disk", 00:32:06.569 "block_size": 512, 00:32:06.569 "num_blocks": 2097152, 00:32:06.569 "uuid": "297618e6-824f-47e3-9b39-be9a61f9677d", 00:32:06.569 "numa_id": 0, 00:32:06.569 "assigned_rate_limits": { 00:32:06.569 "rw_ios_per_sec": 0, 00:32:06.569 "rw_mbytes_per_sec": 0, 00:32:06.569 "r_mbytes_per_sec": 0, 00:32:06.569 "w_mbytes_per_sec": 0 00:32:06.569 }, 00:32:06.569 "claimed": false, 00:32:06.569 "zoned": false, 00:32:06.569 "supported_io_types": { 00:32:06.569 "read": true, 00:32:06.569 "write": true, 00:32:06.569 "unmap": false, 00:32:06.569 "flush": true, 00:32:06.569 "reset": true, 00:32:06.569 "nvme_admin": true, 00:32:06.569 "nvme_io": true, 00:32:06.569 "nvme_io_md": false, 00:32:06.569 "write_zeroes": true, 00:32:06.569 "zcopy": false, 00:32:06.569 "get_zone_info": false, 00:32:06.569 "zone_management": false, 00:32:06.569 "zone_append": false, 00:32:06.569 "compare": true, 00:32:06.569 "compare_and_write": true, 00:32:06.569 "abort": true, 00:32:06.569 "seek_hole": false, 00:32:06.569 "seek_data": false, 00:32:06.569 "copy": true, 00:32:06.569 "nvme_iov_md": false 00:32:06.569 }, 00:32:06.569 "memory_domains": [ 00:32:06.569 { 00:32:06.569 "dma_device_id": "system", 00:32:06.569 "dma_device_type": 1 00:32:06.569 } 00:32:06.569 ], 00:32:06.569 "driver_specific": { 00:32:06.569 "nvme": [ 00:32:06.569 { 00:32:06.569 "trid": { 00:32:06.569 "trtype": "TCP", 00:32:06.569 "adrfam": "IPv4", 00:32:06.569 "traddr": "10.0.0.2", 00:32:06.569 "trsvcid": "4420", 00:32:06.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:06.569 }, 00:32:06.569 "ctrlr_data": { 00:32:06.569 "cntlid": 2, 00:32:06.569 "vendor_id": "0x8086", 00:32:06.569 "model_number": "SPDK bdev Controller", 00:32:06.569 "serial_number": "00000000000000000000", 00:32:06.569 "firmware_revision": "25.01", 00:32:06.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.569 "oacs": { 00:32:06.569 "security": 0, 00:32:06.569 "format": 0, 00:32:06.569 "firmware": 0, 00:32:06.569 "ns_manage": 0 00:32:06.569 }, 00:32:06.569 "multi_ctrlr": true, 00:32:06.569 "ana_reporting": false 00:32:06.569 }, 00:32:06.569 "vs": { 00:32:06.569 "nvme_version": "1.3" 00:32:06.569 }, 00:32:06.569 "ns_data": { 00:32:06.569 "id": 1, 00:32:06.569 "can_share": true 00:32:06.569 } 00:32:06.569 } 00:32:06.569 ], 00:32:06.569 "mp_policy": "active_passive" 00:32:06.569 } 00:32:06.569 } 00:32:06.569 ] 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.569 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
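After detaching nvme0, the test re-secures the target: the commands that follow write a PSK in the NVMe/TCP TLS key interchange format (the NVMeTLSkey-1:01:... string), register it on the keyring, disable allow_any_host, and open a --secure-channel listener on port 4421. A condensed, hedged recap of that flow (the key path below is illustrative; the run uses a mktemp file):

  KEY=/tmp/psk.key    # illustrative; the trace uses a mktemp path
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"   # mirrors the trace; PSK files should not be world-readable
  rpc.py keyring_file_add_key key0 "$KEY"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0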
00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AShwESM21y 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AShwESM21y 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.AShwESM21y 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.830 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.831 [2024-10-13 14:28:10.317153] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:06.831 [2024-10-13 14:28:10.317340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.831 [2024-10-13 14:28:10.341174] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:06.831 nvme0n1 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.831 [ 00:32:06.831 { 00:32:06.831 "name": "nvme0n1", 00:32:06.831 "aliases": [ 00:32:06.831 "297618e6-824f-47e3-9b39-be9a61f9677d" 00:32:06.831 ], 00:32:06.831 "product_name": "NVMe disk", 00:32:06.831 "block_size": 512, 00:32:06.831 "num_blocks": 2097152, 00:32:06.831 "uuid": "297618e6-824f-47e3-9b39-be9a61f9677d", 00:32:06.831 "numa_id": 0, 00:32:06.831 "assigned_rate_limits": { 00:32:06.831 "rw_ios_per_sec": 0, 00:32:06.831 "rw_mbytes_per_sec": 0, 00:32:06.831 "r_mbytes_per_sec": 0, 00:32:06.831 "w_mbytes_per_sec": 0 00:32:06.831 }, 00:32:06.831 "claimed": false, 00:32:06.831 "zoned": false, 00:32:06.831 "supported_io_types": { 00:32:06.831 "read": true, 00:32:06.831 "write": true, 00:32:06.831 "unmap": false, 00:32:06.831 "flush": true, 00:32:06.831 "reset": true, 00:32:06.831 "nvme_admin": true, 00:32:06.831 "nvme_io": true, 00:32:06.831 "nvme_io_md": false, 00:32:06.831 "write_zeroes": true, 00:32:06.831 "zcopy": false, 00:32:06.831 "get_zone_info": false, 00:32:06.831 "zone_management": false, 00:32:06.831 "zone_append": false, 00:32:06.831 "compare": true, 00:32:06.831 "compare_and_write": true, 00:32:06.831 "abort": true, 00:32:06.831 "seek_hole": false, 00:32:06.831 "seek_data": false, 00:32:06.831 "copy": true, 00:32:06.831 "nvme_iov_md": false 00:32:06.831 }, 00:32:06.831 "memory_domains": [ 00:32:06.831 { 00:32:06.831 "dma_device_id": "system", 00:32:06.831 "dma_device_type": 1 00:32:06.831 } 00:32:06.831 ], 00:32:06.831 "driver_specific": { 00:32:06.831 "nvme": [ 00:32:06.831 { 00:32:06.831 "trid": { 00:32:06.831 "trtype": "TCP", 00:32:06.831 "adrfam": "IPv4", 00:32:06.831 "traddr": "10.0.0.2", 00:32:06.831 "trsvcid": "4421", 00:32:06.831 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:06.831 }, 00:32:06.831 "ctrlr_data": { 00:32:06.831 "cntlid": 3, 00:32:06.831 "vendor_id": "0x8086", 00:32:06.831 "model_number": "SPDK bdev Controller", 00:32:06.831 "serial_number": "00000000000000000000", 00:32:06.831 "firmware_revision": "25.01", 00:32:06.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.831 "oacs": { 00:32:06.831 "security": 0, 00:32:06.831 "format": 0, 00:32:06.831 "firmware": 0, 00:32:06.831 "ns_manage": 0 00:32:06.831 }, 00:32:06.831 "multi_ctrlr": true, 00:32:06.831 "ana_reporting": false 00:32:06.831 }, 00:32:06.831 "vs": { 00:32:06.831 "nvme_version": "1.3" 00:32:06.831 }, 00:32:06.831 "ns_data": { 00:32:06.831 "id": 1, 00:32:06.831 "can_share": true 00:32:06.831 } 00:32:06.831 } 00:32:06.831 ], 00:32:06.831 "mp_policy": "active_passive" 00:32:06.831 } 00:32:06.831 } 00:32:06.831 ] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.AShwESM21y 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
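On the host side, the trace above completes the TLS path by attaching to the secured 4421 listener with the same registered key and host NQN, then verifying nvme0n1 before detaching. A hedged equivalent of that attach (rpc.py client assumed):

  # Attach over the TLS listener; both ends must reference the registered PSK.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0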
00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.831 rmmod nvme_tcp 00:32:06.831 rmmod nvme_fabrics 00:32:06.831 rmmod nvme_keyring 00:32:06.831 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1873041 ']' 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1873041 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1873041 ']' 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1873041 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1873041 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1873041' 00:32:07.092 killing process with pid 1873041 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1873041 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1873041 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
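[Annotation] nvmftestfini's teardown, traced above and continuing below, mirrors the setup: unload the host-side NVMe modules, kill the target by PID, strip only the firewall rules the harness tagged, and drop the test namespace. A rough manual equivalent (a sketch; the body of _remove_spdk_ns is not shown verbatim in this trace, so the netns delete is an assumption):

  modprobe -r nvme-tcp nvme-fabrics       # the rmmod lines above show nvme_keyring going too
  kill "$nvmfpid"; wait "$nvmfpid"        # killprocess: stop the nvmf_tgt reactors
  # Remove only rules carrying the harness's SPDK_NVMF comment tag
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk         # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                # clear the initiator-side address (seen below)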
00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.092 14:28:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.637 00:32:09.637 real 0m11.981s 00:32:09.637 user 0m4.109s 00:32:09.637 sys 0m6.331s 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:09.637 ************************************ 00:32:09.637 END TEST nvmf_async_init 00:32:09.637 ************************************ 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.637 ************************************ 00:32:09.637 START TEST dma 00:32:09.637 ************************************ 00:32:09.637 14:28:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:09.637 * Looking for test storage... 00:32:09.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.637 --rc genhtml_branch_coverage=1 00:32:09.637 --rc genhtml_function_coverage=1 00:32:09.637 --rc genhtml_legend=1 00:32:09.637 --rc geninfo_all_blocks=1 00:32:09.637 --rc geninfo_unexecuted_blocks=1 00:32:09.637 00:32:09.637 ' 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.637 --rc genhtml_branch_coverage=1 00:32:09.637 --rc genhtml_function_coverage=1 00:32:09.637 --rc genhtml_legend=1 00:32:09.637 --rc geninfo_all_blocks=1 00:32:09.637 --rc geninfo_unexecuted_blocks=1 00:32:09.637 00:32:09.637 ' 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.637 --rc genhtml_branch_coverage=1 00:32:09.637 --rc genhtml_function_coverage=1 00:32:09.637 --rc genhtml_legend=1 00:32:09.637 --rc geninfo_all_blocks=1 00:32:09.637 --rc geninfo_unexecuted_blocks=1 00:32:09.637 00:32:09.637 ' 00:32:09.637 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:09.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.637 --rc genhtml_branch_coverage=1 00:32:09.637 --rc genhtml_function_coverage=1 00:32:09.637 --rc genhtml_legend=1 00:32:09.637 --rc geninfo_all_blocks=1 00:32:09.638 --rc geninfo_unexecuted_blocks=1 00:32:09.638 00:32:09.638 ' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.638 
14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:09.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:32:09.638 00:32:09.638 real 0m0.241s 00:32:09.638 user 0m0.136s 00:32:09.638 sys 0m0.120s 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:32:09.638 ************************************ 00:32:09.638 END TEST dma 00:32:09.638 ************************************ 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.638 ************************************ 00:32:09.638 START TEST nvmf_identify 00:32:09.638 
************************************ 00:32:09.638 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:09.638 * Looking for test storage... 00:32:09.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:09.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.900 --rc genhtml_branch_coverage=1 00:32:09.900 --rc genhtml_function_coverage=1 00:32:09.900 --rc genhtml_legend=1 00:32:09.900 --rc geninfo_all_blocks=1 00:32:09.900 --rc geninfo_unexecuted_blocks=1 00:32:09.900 00:32:09.900 ' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:09.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.900 --rc genhtml_branch_coverage=1 00:32:09.900 --rc genhtml_function_coverage=1 00:32:09.900 --rc genhtml_legend=1 00:32:09.900 --rc geninfo_all_blocks=1 00:32:09.900 --rc geninfo_unexecuted_blocks=1 00:32:09.900 00:32:09.900 ' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:09.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.900 --rc genhtml_branch_coverage=1 00:32:09.900 --rc genhtml_function_coverage=1 00:32:09.900 --rc genhtml_legend=1 00:32:09.900 --rc geninfo_all_blocks=1 00:32:09.900 --rc geninfo_unexecuted_blocks=1 00:32:09.900 00:32:09.900 ' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:09.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.900 --rc genhtml_branch_coverage=1 00:32:09.900 --rc genhtml_function_coverage=1 00:32:09.900 --rc genhtml_legend=1 00:32:09.900 --rc geninfo_all_blocks=1 00:32:09.900 --rc geninfo_unexecuted_blocks=1 00:32:09.900 00:32:09.900 ' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:32:09.900 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:09.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.901 14:28:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:18.050 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:18.050 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
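[Annotation] gather_supported_nvmf_pci_devs, running above, whitelists NICs by PCI vendor:device pairs (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox ConnectX IDs) and then resolves each matching PCI function to its kernel netdev through sysfs. The same lookup can be reproduced by hand; the device ID and PCI address below are the ones this run discovers:

  lspci -d 8086:159b                          # E810-family ports: 0000:31:00.0/.1 here
  ls /sys/bus/pci/devices/0000:31:00.0/net/   # -> cvl_0_0, exactly as the harness maps it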
00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:18.050 Found net devices under 0000:31:00.0: cvl_0_0 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.050 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:18.051 Found net devices under 0000:31:00.1: cvl_0_1 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.051 14:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:18.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:32:18.051 00:32:18.051 --- 10.0.0.2 ping statistics --- 00:32:18.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.051 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:18.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:32:18.051 00:32:18.051 --- 10.0.0.1 ping statistics --- 00:32:18.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.051 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1877818 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1877818 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1877818 ']' 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:18.051 14:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.051 [2024-10-13 14:28:21.317811] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:32:18.051 [2024-10-13 14:28:21.317879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.051 [2024-10-13 14:28:21.462052] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
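[Annotation] Because both E810 ports sit in one chassis, the harness isolates the target port in a private network namespace so NVMe/TCP traffic genuinely crosses the link; the two pings above prove reachability in both directions before the target comes up. The plumbing, condensed from the trace (interface and namespace names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagged so teardown can find the rule again
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator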
00:32:18.051 [2024-10-13 14:28:21.511363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:18.051 [2024-10-13 14:28:21.541002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.051 [2024-10-13 14:28:21.541046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.051 [2024-10-13 14:28:21.541054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.051 [2024-10-13 14:28:21.541061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.051 [2024-10-13 14:28:21.541079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:18.051 [2024-10-13 14:28:21.543030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.051 [2024-10-13 14:28:21.543189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.051 [2024-10-13 14:28:21.543482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:18.051 [2024-10-13 14:28:21.543484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 [2024-10-13 14:28:22.150395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 Malloc0 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 
14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 [2024-10-13 14:28:22.279432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.625 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:18.625 [ 00:32:18.625 { 00:32:18.625 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:18.625 "subtype": "Discovery", 00:32:18.625 "listen_addresses": [ 00:32:18.625 { 00:32:18.625 "trtype": "TCP", 00:32:18.625 "adrfam": "IPv4", 00:32:18.625 "traddr": "10.0.0.2", 00:32:18.625 "trsvcid": "4420" 00:32:18.625 } 00:32:18.625 ], 00:32:18.625 "allow_any_host": true, 00:32:18.625 "hosts": [] 00:32:18.625 }, 00:32:18.625 { 00:32:18.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.625 "subtype": "NVMe", 00:32:18.625 "listen_addresses": [ 00:32:18.625 { 00:32:18.625 "trtype": "TCP", 00:32:18.625 "adrfam": "IPv4", 00:32:18.625 "traddr": "10.0.0.2", 00:32:18.625 "trsvcid": "4420" 00:32:18.626 } 00:32:18.626 ], 00:32:18.626 "allow_any_host": true, 00:32:18.626 "hosts": [], 00:32:18.626 "serial_number": "SPDK00000000000001", 00:32:18.626 "model_number": "SPDK bdev Controller", 00:32:18.626 "max_namespaces": 32, 00:32:18.626 "min_cntlid": 1, 00:32:18.626 "max_cntlid": 65519, 00:32:18.626 "namespaces": [ 00:32:18.626 { 00:32:18.626 "nsid": 1, 00:32:18.626 "bdev_name": "Malloc0", 00:32:18.626 "name": "Malloc0", 00:32:18.626 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:32:18.626 "eui64": "ABCDEF0123456789", 00:32:18.626 "uuid": "18b45bcb-ad36-4095-8bcc-752a8992a3c0" 00:32:18.626 } 00:32:18.626 ] 00:32:18.626 } 00:32:18.626 ] 00:32:18.626 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.626 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:32:18.889 [2024-10-13 14:28:22.343916] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
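[Annotation] The identify test's target is deliberately minimal: one TCP transport, a 64 MiB malloc bdev exported as namespace 1 with fixed NGUID/EUI64 values, and listeners for both the subsystem and the discovery service; spdk_nvme_identify is then aimed at the discovery NQN, as launched above. A sketch of the equivalent sequence via scripts/rpc.py (assumed to reach this target; paths abbreviated):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # same transport opts the harness passes
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'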
00:32:18.889 [2024-10-13 14:28:22.343966] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878133 ] 00:32:18.889 [2024-10-13 14:28:22.459759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:18.889 [2024-10-13 14:28:22.483258] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:18.889 [2024-10-13 14:28:22.483330] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:18.889 [2024-10-13 14:28:22.483336] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:18.889 [2024-10-13 14:28:22.483354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:18.889 [2024-10-13 14:28:22.483365] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:18.889 [2024-10-13 14:28:22.484305] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:18.889 [2024-10-13 14:28:22.484350] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ec8120 0 00:32:18.889 [2024-10-13 14:28:22.498084] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:18.889 [2024-10-13 14:28:22.498101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:18.889 [2024-10-13 14:28:22.498106] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:18.889 [2024-10-13 14:28:22.498109] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:18.889 [2024-10-13 14:28:22.498148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.498154] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.498158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.498175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:18.889 [2024-10-13 14:28:22.498199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 [2024-10-13 14:28:22.506078] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.506090] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.889 [2024-10-13 14:28:22.506093] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.506112] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:18.889 [2024-10-13 14:28:22.506120] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:18.889 [2024-10-13 14:28:22.506126] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:18.889 [2024-10-13 14:28:22.506147] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.506165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.889 [2024-10-13 14:28:22.506183] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 [2024-10-13 14:28:22.506414] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.506421] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.889 [2024-10-13 14:28:22.506425] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.506434] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:18.889 [2024-10-13 14:28:22.506443] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:18.889 [2024-10-13 14:28:22.506450] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506458] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.506465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.889 [2024-10-13 14:28:22.506476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 [2024-10-13 14:28:22.506698] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.506704] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.889 [2024-10-13 14:28:22.506708] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.506717] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:18.889 [2024-10-13 14:28:22.506725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:18.889 [2024-10-13 14:28:22.506732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506740] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.506747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.889 [2024-10-13 14:28:22.506757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 
[2024-10-13 14:28:22.506968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.506975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.889 [2024-10-13 14:28:22.506978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.506982] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.506987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:18.889 [2024-10-13 14:28:22.506997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507001] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.507015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.889 [2024-10-13 14:28:22.507025] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 [2024-10-13 14:28:22.507227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.507234] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.889 [2024-10-13 14:28:22.507237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.507246] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:18.889 [2024-10-13 14:28:22.507251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:18.889 [2024-10-13 14:28:22.507258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:18.889 [2024-10-13 14:28:22.507364] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:18.889 [2024-10-13 14:28:22.507369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:18.889 [2024-10-13 14:28:22.507378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.507392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.889 [2024-10-13 14:28:22.507403] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 [2024-10-13 14:28:22.507613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.507620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:32:18.889 [2024-10-13 14:28:22.507623] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.507632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:18.889 [2024-10-13 14:28:22.507641] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.889 [2024-10-13 14:28:22.507655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.889 [2024-10-13 14:28:22.507665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.889 [2024-10-13 14:28:22.507857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.889 [2024-10-13 14:28:22.507863] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.889 [2024-10-13 14:28:22.507867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.889 [2024-10-13 14:28:22.507875] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:18.889 [2024-10-13 14:28:22.507883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:18.889 [2024-10-13 14:28:22.507891] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:18.889 [2024-10-13 14:28:22.507907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:18.889 [2024-10-13 14:28:22.507917] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.889 [2024-10-13 14:28:22.507920] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.507927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.890 [2024-10-13 14:28:22.507938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.890 [2024-10-13 14:28:22.508190] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:18.890 [2024-10-13 14:28:22.508196] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:18.890 [2024-10-13 14:28:22.508200] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec8120): datao=0, datal=4096, cccid=0 00:32:18.890 [2024-10-13 14:28:22.508209] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f34c40) on tqpair(0x1ec8120): expected_datao=0, 
payload_size=4096 00:32:18.890 [2024-10-13 14:28:22.508214] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508243] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508247] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.890 [2024-10-13 14:28:22.508426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.890 [2024-10-13 14:28:22.508430] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.890 [2024-10-13 14:28:22.508442] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:18.890 [2024-10-13 14:28:22.508447] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:18.890 [2024-10-13 14:28:22.508452] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:18.890 [2024-10-13 14:28:22.508457] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:18.890 [2024-10-13 14:28:22.508462] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:18.890 [2024-10-13 14:28:22.508466] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:18.890 [2024-10-13 14:28:22.508475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:18.890 [2024-10-13 14:28:22.508482] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508489] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.508496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:18.890 [2024-10-13 14:28:22.508507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.890 [2024-10-13 14:28:22.508716] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.890 [2024-10-13 14:28:22.508725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.890 [2024-10-13 14:28:22.508729] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508733] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:18.890 [2024-10-13 14:28:22.508741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508744] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.508754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.890 [2024-10-13 14:28:22.508761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508764] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.508774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.890 [2024-10-13 14:28:22.508781] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508784] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.508794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.890 [2024-10-13 14:28:22.508800] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508804] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.508813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.890 [2024-10-13 14:28:22.508818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:18.890 [2024-10-13 14:28:22.508830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:18.890 [2024-10-13 14:28:22.508836] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.508840] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.508847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.890 [2024-10-13 14:28:22.508859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34c40, cid 0, qid 0 00:32:18.890 [2024-10-13 14:28:22.508864] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34dc0, cid 1, qid 0 00:32:18.890 [2024-10-13 14:28:22.508869] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f34f40, cid 2, qid 0 00:32:18.890 [2024-10-13 14:28:22.508874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f350c0, cid 3, qid 0 00:32:18.890 [2024-10-13 14:28:22.508879] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f35240, cid 4, qid 0 00:32:18.890 [2024-10-13 14:28:22.509128] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.890 [2024-10-13 14:28:22.509135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.890 [2024-10-13 14:28:22.509139] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.509143] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f35240) on tqpair=0x1ec8120 00:32:18.890 [2024-10-13 14:28:22.509148] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:18.890 [2024-10-13 14:28:22.509156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:18.890 [2024-10-13 14:28:22.509167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.509171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.509177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.890 [2024-10-13 14:28:22.509188] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f35240, cid 4, qid 0 00:32:18.890 [2024-10-13 14:28:22.509441] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:18.890 [2024-10-13 14:28:22.509448] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:18.890 [2024-10-13 14:28:22.509451] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.509455] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec8120): datao=0, datal=4096, cccid=4 00:32:18.890 [2024-10-13 14:28:22.509459] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f35240) on tqpair(0x1ec8120): expected_datao=0, payload_size=4096 00:32:18.890 [2024-10-13 14:28:22.509464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.509475] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.509478] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.890 [2024-10-13 14:28:22.553085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.890 [2024-10-13 14:28:22.553088] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553092] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f35240) on tqpair=0x1ec8120 00:32:18.890 [2024-10-13 14:28:22.553108] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:18.890 [2024-10-13 14:28:22.553144] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.553156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:18.890 [2024-10-13 14:28:22.553164] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec8120) 00:32:18.890 [2024-10-13 14:28:22.553178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.890 [2024-10-13 
14:28:22.553193] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f35240, cid 4, qid 0 00:32:18.890 [2024-10-13 14:28:22.553198] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f353c0, cid 5, qid 0 00:32:18.890 [2024-10-13 14:28:22.553487] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:18.890 [2024-10-13 14:28:22.553493] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:18.890 [2024-10-13 14:28:22.553497] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553500] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec8120): datao=0, datal=1024, cccid=4 00:32:18.890 [2024-10-13 14:28:22.553505] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f35240) on tqpair(0x1ec8120): expected_datao=0, payload_size=1024 00:32:18.890 [2024-10-13 14:28:22.553509] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553516] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553520] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:18.890 [2024-10-13 14:28:22.553535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:18.890 [2024-10-13 14:28:22.553538] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:18.890 [2024-10-13 14:28:22.553542] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f353c0) on tqpair=0x1ec8120 00:32:19.154 [2024-10-13 14:28:22.594278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.154 [2024-10-13 14:28:22.594291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.154 [2024-10-13 14:28:22.594294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594298] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f35240) on tqpair=0x1ec8120 00:32:19.154 [2024-10-13 14:28:22.594311] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec8120) 00:32:19.154 [2024-10-13 14:28:22.594322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.154 [2024-10-13 14:28:22.594339] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f35240, cid 4, qid 0 00:32:19.154 [2024-10-13 14:28:22.594590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.154 [2024-10-13 14:28:22.594597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.154 [2024-10-13 14:28:22.594600] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594604] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec8120): datao=0, datal=3072, cccid=4 00:32:19.154 [2024-10-13 14:28:22.594609] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f35240) on tqpair(0x1ec8120): expected_datao=0, payload_size=3072 00:32:19.154 [2024-10-13 14:28:22.594613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594632] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594636] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594813] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.154 [2024-10-13 14:28:22.594820] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.154 [2024-10-13 14:28:22.594823] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f35240) on tqpair=0x1ec8120 00:32:19.154 [2024-10-13 14:28:22.594835] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.594839] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec8120) 00:32:19.154 [2024-10-13 14:28:22.594846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.154 [2024-10-13 14:28:22.594859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f35240, cid 4, qid 0 00:32:19.154 [2024-10-13 14:28:22.595097] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.154 [2024-10-13 14:28:22.595104] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.154 [2024-10-13 14:28:22.595107] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.595111] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec8120): datao=0, datal=8, cccid=4 00:32:19.154 [2024-10-13 14:28:22.595115] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f35240) on tqpair(0x1ec8120): expected_datao=0, payload_size=8 00:32:19.154 [2024-10-13 14:28:22.595120] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.595126] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.595130] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.636235] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.154 [2024-10-13 14:28:22.636249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.154 [2024-10-13 14:28:22.636253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.154 [2024-10-13 14:28:22.636257] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f35240) on tqpair=0x1ec8120 00:32:19.154 ===================================================== 00:32:19.154 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:19.154 ===================================================== 00:32:19.154 Controller Capabilities/Features 00:32:19.154 ================================ 00:32:19.154 Vendor ID: 0000 00:32:19.154 Subsystem Vendor ID: 0000 00:32:19.154 Serial Number: .................... 00:32:19.154 Model Number: ........................................ 
00:32:19.154 Firmware Version: 25.01 00:32:19.154 Recommended Arb Burst: 0 00:32:19.154 IEEE OUI Identifier: 00 00 00 00:32:19.154 Multi-path I/O 00:32:19.154 May have multiple subsystem ports: No 00:32:19.154 May have multiple controllers: No 00:32:19.154 Associated with SR-IOV VF: No 00:32:19.154 Max Data Transfer Size: 131072 00:32:19.154 Max Number of Namespaces: 0 00:32:19.154 Max Number of I/O Queues: 1024 00:32:19.154 NVMe Specification Version (VS): 1.3 00:32:19.154 NVMe Specification Version (Identify): 1.3 00:32:19.154 Maximum Queue Entries: 128 00:32:19.154 Contiguous Queues Required: Yes 00:32:19.154 Arbitration Mechanisms Supported 00:32:19.154 Weighted Round Robin: Not Supported 00:32:19.154 Vendor Specific: Not Supported 00:32:19.154 Reset Timeout: 15000 ms 00:32:19.154 Doorbell Stride: 4 bytes 00:32:19.154 NVM Subsystem Reset: Not Supported 00:32:19.154 Command Sets Supported 00:32:19.154 NVM Command Set: Supported 00:32:19.154 Boot Partition: Not Supported 00:32:19.154 Memory Page Size Minimum: 4096 bytes 00:32:19.154 Memory Page Size Maximum: 4096 bytes 00:32:19.154 Persistent Memory Region: Not Supported 00:32:19.154 Optional Asynchronous Events Supported 00:32:19.154 Namespace Attribute Notices: Not Supported 00:32:19.154 Firmware Activation Notices: Not Supported 00:32:19.154 ANA Change Notices: Not Supported 00:32:19.154 PLE Aggregate Log Change Notices: Not Supported 00:32:19.154 LBA Status Info Alert Notices: Not Supported 00:32:19.154 EGE Aggregate Log Change Notices: Not Supported 00:32:19.154 Normal NVM Subsystem Shutdown event: Not Supported 00:32:19.154 Zone Descriptor Change Notices: Not Supported 00:32:19.154 Discovery Log Change Notices: Supported 00:32:19.154 Controller Attributes 00:32:19.154 128-bit Host Identifier: Not Supported 00:32:19.154 Non-Operational Permissive Mode: Not Supported 00:32:19.154 NVM Sets: Not Supported 00:32:19.154 Read Recovery Levels: Not Supported 00:32:19.154 Endurance Groups: Not Supported 00:32:19.154 Predictable Latency Mode: Not Supported 00:32:19.154 Traffic Based Keep ALive: Not Supported 00:32:19.154 Namespace Granularity: Not Supported 00:32:19.154 SQ Associations: Not Supported 00:32:19.154 UUID List: Not Supported 00:32:19.154 Multi-Domain Subsystem: Not Supported 00:32:19.154 Fixed Capacity Management: Not Supported 00:32:19.154 Variable Capacity Management: Not Supported 00:32:19.154 Delete Endurance Group: Not Supported 00:32:19.154 Delete NVM Set: Not Supported 00:32:19.154 Extended LBA Formats Supported: Not Supported 00:32:19.154 Flexible Data Placement Supported: Not Supported 00:32:19.154 00:32:19.154 Controller Memory Buffer Support 00:32:19.154 ================================ 00:32:19.154 Supported: No 00:32:19.154 00:32:19.154 Persistent Memory Region Support 00:32:19.154 ================================ 00:32:19.154 Supported: No 00:32:19.154 00:32:19.154 Admin Command Set Attributes 00:32:19.154 ============================ 00:32:19.154 Security Send/Receive: Not Supported 00:32:19.154 Format NVM: Not Supported 00:32:19.154 Firmware Activate/Download: Not Supported 00:32:19.154 Namespace Management: Not Supported 00:32:19.154 Device Self-Test: Not Supported 00:32:19.154 Directives: Not Supported 00:32:19.154 NVMe-MI: Not Supported 00:32:19.154 Virtualization Management: Not Supported 00:32:19.154 Doorbell Buffer Config: Not Supported 00:32:19.154 Get LBA Status Capability: Not Supported 00:32:19.154 Command & Feature Lockdown Capability: Not Supported 00:32:19.154 Abort Command Limit: 1 00:32:19.154 Async 
Event Request Limit: 4 00:32:19.154 Number of Firmware Slots: N/A 00:32:19.154 Firmware Slot 1 Read-Only: N/A 00:32:19.154 Firmware Activation Without Reset: N/A 00:32:19.154 Multiple Update Detection Support: N/A 00:32:19.154 Firmware Update Granularity: No Information Provided 00:32:19.154 Per-Namespace SMART Log: No 00:32:19.154 Asymmetric Namespace Access Log Page: Not Supported 00:32:19.154 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:19.154 Command Effects Log Page: Not Supported 00:32:19.154 Get Log Page Extended Data: Supported 00:32:19.154 Telemetry Log Pages: Not Supported 00:32:19.154 Persistent Event Log Pages: Not Supported 00:32:19.154 Supported Log Pages Log Page: May Support 00:32:19.154 Commands Supported & Effects Log Page: Not Supported 00:32:19.154 Feature Identifiers & Effects Log Page:May Support 00:32:19.154 NVMe-MI Commands & Effects Log Page: May Support 00:32:19.154 Data Area 4 for Telemetry Log: Not Supported 00:32:19.154 Error Log Page Entries Supported: 128 00:32:19.154 Keep Alive: Not Supported 00:32:19.154 00:32:19.154 NVM Command Set Attributes 00:32:19.154 ========================== 00:32:19.154 Submission Queue Entry Size 00:32:19.154 Max: 1 00:32:19.154 Min: 1 00:32:19.154 Completion Queue Entry Size 00:32:19.154 Max: 1 00:32:19.154 Min: 1 00:32:19.154 Number of Namespaces: 0 00:32:19.154 Compare Command: Not Supported 00:32:19.154 Write Uncorrectable Command: Not Supported 00:32:19.154 Dataset Management Command: Not Supported 00:32:19.154 Write Zeroes Command: Not Supported 00:32:19.154 Set Features Save Field: Not Supported 00:32:19.154 Reservations: Not Supported 00:32:19.154 Timestamp: Not Supported 00:32:19.154 Copy: Not Supported 00:32:19.154 Volatile Write Cache: Not Present 00:32:19.154 Atomic Write Unit (Normal): 1 00:32:19.154 Atomic Write Unit (PFail): 1 00:32:19.154 Atomic Compare & Write Unit: 1 00:32:19.155 Fused Compare & Write: Supported 00:32:19.155 Scatter-Gather List 00:32:19.155 SGL Command Set: Supported 00:32:19.155 SGL Keyed: Supported 00:32:19.155 SGL Bit Bucket Descriptor: Not Supported 00:32:19.155 SGL Metadata Pointer: Not Supported 00:32:19.155 Oversized SGL: Not Supported 00:32:19.155 SGL Metadata Address: Not Supported 00:32:19.155 SGL Offset: Supported 00:32:19.155 Transport SGL Data Block: Not Supported 00:32:19.155 Replay Protected Memory Block: Not Supported 00:32:19.155 00:32:19.155 Firmware Slot Information 00:32:19.155 ========================= 00:32:19.155 Active slot: 0 00:32:19.155 00:32:19.155 00:32:19.155 Error Log 00:32:19.155 ========= 00:32:19.155 00:32:19.155 Active Namespaces 00:32:19.155 ================= 00:32:19.155 Discovery Log Page 00:32:19.155 ================== 00:32:19.155 Generation Counter: 2 00:32:19.155 Number of Records: 2 00:32:19.155 Record Format: 0 00:32:19.155 00:32:19.155 Discovery Log Entry 0 00:32:19.155 ---------------------- 00:32:19.155 Transport Type: 3 (TCP) 00:32:19.155 Address Family: 1 (IPv4) 00:32:19.155 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:19.155 Entry Flags: 00:32:19.155 Duplicate Returned Information: 1 00:32:19.155 Explicit Persistent Connection Support for Discovery: 1 00:32:19.155 Transport Requirements: 00:32:19.155 Secure Channel: Not Required 00:32:19.155 Port ID: 0 (0x0000) 00:32:19.155 Controller ID: 65535 (0xffff) 00:32:19.155 Admin Max SQ Size: 128 00:32:19.155 Transport Service Identifier: 4420 00:32:19.155 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:19.155 Transport Address: 10.0.0.2 00:32:19.155 
Discovery Log Entry 1 00:32:19.155 ---------------------- 00:32:19.155 Transport Type: 3 (TCP) 00:32:19.155 Address Family: 1 (IPv4) 00:32:19.155 Subsystem Type: 2 (NVM Subsystem) 00:32:19.155 Entry Flags: 00:32:19.155 Duplicate Returned Information: 0 00:32:19.155 Explicit Persistent Connection Support for Discovery: 0 00:32:19.155 Transport Requirements: 00:32:19.155 Secure Channel: Not Required 00:32:19.155 Port ID: 0 (0x0000) 00:32:19.155 Controller ID: 65535 (0xffff) 00:32:19.155 Admin Max SQ Size: 128 00:32:19.155 Transport Service Identifier: 4420 00:32:19.155 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:32:19.155 Transport Address: 10.0.0.2 [2024-10-13 14:28:22.636358] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:32:19.155 [2024-10-13 14:28:22.636369] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34c40) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.636376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.155 [2024-10-13 14:28:22.636382] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34dc0) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.636387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.155 [2024-10-13 14:28:22.636392] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f34f40) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.636397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.155 [2024-10-13 14:28:22.636402] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f350c0) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.636406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.155 [2024-10-13 14:28:22.636416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636420] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636423] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec8120) 00:32:19.155 [2024-10-13 14:28:22.636431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.155 [2024-10-13 14:28:22.636446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f350c0, cid 3, qid 0 00:32:19.155 [2024-10-13 14:28:22.636567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.155 [2024-10-13 14:28:22.636574] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.155 [2024-10-13 14:28:22.636577] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f350c0) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.636589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636596] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec8120) 00:32:19.155 [2024-10-13 
14:28:22.636603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.155 [2024-10-13 14:28:22.636618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f350c0, cid 3, qid 0 00:32:19.155 [2024-10-13 14:28:22.636817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.155 [2024-10-13 14:28:22.636823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.155 [2024-10-13 14:28:22.636826] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636830] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f350c0) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.636835] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:32:19.155 [2024-10-13 14:28:22.636843] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:32:19.155 [2024-10-13 14:28:22.636853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.636866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec8120) 00:32:19.155 [2024-10-13 14:28:22.636872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.155 [2024-10-13 14:28:22.636883] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f350c0, cid 3, qid 0 00:32:19.155 [2024-10-13 14:28:22.637059] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.155 [2024-10-13 14:28:22.641074] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.155 [2024-10-13 14:28:22.641079] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.641083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f350c0) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.641094] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.641098] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.641101] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec8120) 00:32:19.155 [2024-10-13 14:28:22.641108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.155 [2024-10-13 14:28:22.641119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f350c0, cid 3, qid 0 00:32:19.155 [2024-10-13 14:28:22.641306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.155 [2024-10-13 14:28:22.641313] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.155 [2024-10-13 14:28:22.641316] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.641320] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f350c0) on tqpair=0x1ec8120 00:32:19.155 [2024-10-13 14:28:22.641327] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:32:19.155 00:32:19.155 14:28:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:32:19.155 [2024-10-13 14:28:22.688788] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:32:19.155 [2024-10-13 14:28:22.688852] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878176 ] 00:32:19.155 [2024-10-13 14:28:22.801754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:19.155 [2024-10-13 14:28:22.825185] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:32:19.155 [2024-10-13 14:28:22.825241] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:19.155 [2024-10-13 14:28:22.825246] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:19.155 [2024-10-13 14:28:22.825263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:19.155 [2024-10-13 14:28:22.825274] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:19.155 [2024-10-13 14:28:22.829361] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:32:19.155 [2024-10-13 14:28:22.829406] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8db120 0 00:32:19.155 [2024-10-13 14:28:22.837087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:19.155 [2024-10-13 14:28:22.837110] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:19.155 [2024-10-13 14:28:22.837115] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:19.155 [2024-10-13 14:28:22.837118] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:19.155 [2024-10-13 14:28:22.837151] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.837157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.837161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.155 [2024-10-13 14:28:22.837175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:19.155 [2024-10-13 14:28:22.837197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.155 [2024-10-13 14:28:22.845077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.155 [2024-10-13 14:28:22.845087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.155 [2024-10-13 14:28:22.845091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.155 [2024-10-13 14:28:22.845096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.845105] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:19.156 [2024-10-13 14:28:22.845112] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:19.156 [2024-10-13 14:28:22.845118] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:19.156 [2024-10-13 14:28:22.845132] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845136] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845140] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.845149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.845165] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.156 [2024-10-13 14:28:22.845398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.156 [2024-10-13 14:28:22.845405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.156 [2024-10-13 14:28:22.845408] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845412] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.845418] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:19.156 [2024-10-13 14:28:22.845426] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:19.156 [2024-10-13 14:28:22.845432] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.845446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.845457] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.156 [2024-10-13 14:28:22.845675] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.156 [2024-10-13 14:28:22.845681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.156 [2024-10-13 14:28:22.845684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845688] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.845694] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:19.156 [2024-10-13 14:28:22.845706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:19.156 [2024-10-13 14:28:22.845713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.845728] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.845738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.156 [2024-10-13 14:28:22.845942] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.156 [2024-10-13 14:28:22.845949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.156 [2024-10-13 14:28:22.845952] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845956] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.845961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:19.156 [2024-10-13 14:28:22.845970] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845974] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.845978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.845985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.845995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.156 [2024-10-13 14:28:22.846185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.156 [2024-10-13 14:28:22.846192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.156 [2024-10-13 14:28:22.846195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.846204] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:19.156 [2024-10-13 14:28:22.846209] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:19.156 [2024-10-13 14:28:22.846216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:19.156 [2024-10-13 14:28:22.846322] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:19.156 [2024-10-13 14:28:22.846326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:19.156 [2024-10-13 14:28:22.846334] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846338] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846341] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.846348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.846359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 
00:32:19.156 [2024-10-13 14:28:22.846539] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.156 [2024-10-13 14:28:22.846546] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.156 [2024-10-13 14:28:22.846549] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846553] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.846560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:19.156 [2024-10-13 14:28:22.846570] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846573] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846577] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.846584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.846594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.156 [2024-10-13 14:28:22.846804] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.156 [2024-10-13 14:28:22.846810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.156 [2024-10-13 14:28:22.846813] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846817] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.156 [2024-10-13 14:28:22.846822] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:19.156 [2024-10-13 14:28:22.846826] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:19.156 [2024-10-13 14:28:22.846834] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:19.156 [2024-10-13 14:28:22.846848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:19.156 [2024-10-13 14:28:22.846857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.846861] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.156 [2024-10-13 14:28:22.846868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.156 [2024-10-13 14:28:22.846878] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.156 [2024-10-13 14:28:22.847129] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.156 [2024-10-13 14:28:22.847136] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.156 [2024-10-13 14:28:22.847139] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.847144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=4096, cccid=0 00:32:19.156 
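At this point the same init sequence is running against nqn.2016-06.io.spdk:cnode1 (the spdk_nvme_identify invocation above), and the c2h_data records carry the 4096-byte IDENTIFY CONTROLLER payload (cdw10:00000001, CNS 01h). Once spdk_nvme_connect() returns, the driver keeps that payload cached; a sketch of reading back the fields the nvme_ctrlr_identify_done records print. The helper name is illustrative, and the fuses bitfield name is per nvme_spec.h, worth double-checking against the tree in use:

/* Hypothetical helper (not from the test): dump identify data that
 * nvme_ctrlr_identify_done logs above. The payload is cached on the
 * ctrlr, so no further admin command is needed. */
#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("CNTLID: 0x%04x\n", cdata->cntlid);   /* log: CNTLID 0x0001 */
	printf("MDTS (raw): %u\n", cdata->mdts);     /* log: MDTS max_xfer_size 131072 (derived) */
	/* Field name per nvme_spec.h; log: "fuses compare and write: 1". */
	printf("Fused compare-and-write: %u\n", cdata->fuses.compare_and_write);
}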
[2024-10-13 14:28:22.847148] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x947c40) on tqpair(0x8db120): expected_datao=0, payload_size=4096 00:32:19.156 [2024-10-13 14:28:22.847153] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.847168] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.156 [2024-10-13 14:28:22.847173] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888277] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.421 [2024-10-13 14:28:22.888291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.421 [2024-10-13 14:28:22.888295] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.421 [2024-10-13 14:28:22.888309] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:19.421 [2024-10-13 14:28:22.888315] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:19.421 [2024-10-13 14:28:22.888319] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:19.421 [2024-10-13 14:28:22.888328] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:19.421 [2024-10-13 14:28:22.888333] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:19.421 [2024-10-13 14:28:22.888338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.888348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.888356] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888363] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.888372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:19.421 [2024-10-13 14:28:22.888386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.421 [2024-10-13 14:28:22.888606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.421 [2024-10-13 14:28:22.888613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.421 [2024-10-13 14:28:22.888616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888620] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.421 [2024-10-13 14:28:22.888627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888631] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.888641] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.421 [2024-10-13 14:28:22.888648] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888651] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.888661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.421 [2024-10-13 14:28:22.888667] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888671] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888675] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.888680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.421 [2024-10-13 14:28:22.888687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888694] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.888700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.421 [2024-10-13 14:28:22.888704] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.888718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.888725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.888729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.888736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.421 [2024-10-13 14:28:22.888750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947c40, cid 0, qid 0 00:32:19.421 [2024-10-13 14:28:22.888756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947dc0, cid 1, qid 0 00:32:19.421 [2024-10-13 14:28:22.888760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x947f40, cid 2, qid 0 00:32:19.421 [2024-10-13 14:28:22.888765] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.421 [2024-10-13 14:28:22.888770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.421 [2024-10-13 14:28:22.889015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.421 [2024-10-13 14:28:22.889021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.421 [2024-10-13 14:28:22.889025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.889029] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.421 [2024-10-13 14:28:22.889034] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:19.421 [2024-10-13 14:28:22.889039] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.889050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.893070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.893078] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.893082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.893085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.421 [2024-10-13 14:28:22.893092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:19.421 [2024-10-13 14:28:22.893103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.421 [2024-10-13 14:28:22.893323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.421 [2024-10-13 14:28:22.893329] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.421 [2024-10-13 14:28:22.893332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.893336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.421 [2024-10-13 14:28:22.893404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.893414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:19.421 [2024-10-13 14:28:22.893421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.421 [2024-10-13 14:28:22.893425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:22.893432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:22.893442] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.422 [2024-10-13 14:28:22.893677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.422 [2024-10-13 14:28:22.893684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.422 [2024-10-13 14:28:22.893687] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.893692] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=4096, cccid=4 00:32:19.422 [2024-10-13 14:28:22.893699] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x948240) on tqpair(0x8db120): expected_datao=0, payload_size=4096 00:32:19.422 
[2024-10-13 14:28:22.893703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.893718] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.893722] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.934244] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:22.934257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:22.934260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.934264] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:22.934276] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:19.422 [2024-10-13 14:28:22.934289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:22.934299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:22.934306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.934310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:22.934317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:22.934330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.422 [2024-10-13 14:28:22.934544] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.422 [2024-10-13 14:28:22.934550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.422 [2024-10-13 14:28:22.934554] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.934557] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=4096, cccid=4 00:32:19.422 [2024-10-13 14:28:22.934562] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x948240) on tqpair(0x8db120): expected_datao=0, payload_size=4096 00:32:19.422 [2024-10-13 14:28:22.934567] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.934580] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.934585] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.978071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:22.978081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:22.978084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.978088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:22.978104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:22.978115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:22.978122] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.978126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:22.978133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:22.978146] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.422 [2024-10-13 14:28:22.978328] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.422 [2024-10-13 14:28:22.978337] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.422 [2024-10-13 14:28:22.978341] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.978345] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=4096, cccid=4 00:32:19.422 [2024-10-13 14:28:22.978350] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x948240) on tqpair(0x8db120): expected_datao=0, payload_size=4096 00:32:19.422 [2024-10-13 14:28:22.978354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.978366] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:22.978370] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020253] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:23.020263] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:23.020267] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020271] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:23.020280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:23.020288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:23.020298] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:23.020305] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:23.020310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:23.020315] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:19.422 [2024-10-13 14:28:23.020320] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:19.422 [2024-10-13 14:28:23.020325] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
00:32:19.422 [2024-10-13 14:28:23.020331] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:19.422 [2024-10-13 14:28:23.020348] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:23.020360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:23.020367] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020375] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:23.020381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:19.422 [2024-10-13 14:28:23.020395] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.422 [2024-10-13 14:28:23.020400] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 5, qid 0 00:32:19.422 [2024-10-13 14:28:23.020587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:23.020593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:23.020597] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020601] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:23.020611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:23.020617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:23.020621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:23.020634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:23.020644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:23.020654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 5, qid 0 00:32:19.422 [2024-10-13 14:28:23.020867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:23.020873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:23.020876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:23.020890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.020894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8db120) 
00:32:19.422 [2024-10-13 14:28:23.020900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:23.020911] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 5, qid 0 00:32:19.422 [2024-10-13 14:28:23.021089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:23.021096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:23.021100] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.021104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:23.021113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.021117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:23.021123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:23.021133] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 5, qid 0 00:32:19.422 [2024-10-13 14:28:23.021357] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.422 [2024-10-13 14:28:23.021363] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.422 [2024-10-13 14:28:23.021367] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.021371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8db120 00:32:19.422 [2024-10-13 14:28:23.021387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.021391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8db120) 00:32:19.422 [2024-10-13 14:28:23.021398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.422 [2024-10-13 14:28:23.021405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.422 [2024-10-13 14:28:23.021409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8db120) 00:32:19.423 [2024-10-13 14:28:23.021415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.423 [2024-10-13 14:28:23.021425] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8db120) 00:32:19.423 [2024-10-13 14:28:23.021435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.423 [2024-10-13 14:28:23.021444] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021448] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8db120) 00:32:19.423 [2024-10-13 14:28:23.021454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.423 [2024-10-13 14:28:23.021466] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9483c0, cid 5, qid 0 00:32:19.423 [2024-10-13 14:28:23.021471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948240, cid 4, qid 0 00:32:19.423 [2024-10-13 14:28:23.021476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x948540, cid 6, qid 0 00:32:19.423 [2024-10-13 14:28:23.021480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9486c0, cid 7, qid 0 00:32:19.423 [2024-10-13 14:28:23.021803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.423 [2024-10-13 14:28:23.021809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.423 [2024-10-13 14:28:23.021813] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=8192, cccid=5 00:32:19.423 [2024-10-13 14:28:23.021821] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9483c0) on tqpair(0x8db120): expected_datao=0, payload_size=8192 00:32:19.423 [2024-10-13 14:28:23.021826] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021900] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021905] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.423 [2024-10-13 14:28:23.021916] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.423 [2024-10-13 14:28:23.021920] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=512, cccid=4 00:32:19.423 [2024-10-13 14:28:23.021928] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x948240) on tqpair(0x8db120): expected_datao=0, payload_size=512 00:32:19.423 [2024-10-13 14:28:23.021932] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021939] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021942] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021948] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.423 [2024-10-13 14:28:23.021954] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.423 [2024-10-13 14:28:23.021957] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021961] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=512, cccid=6 00:32:19.423 [2024-10-13 14:28:23.021965] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x948540) on tqpair(0x8db120): expected_datao=0, payload_size=512 00:32:19.423 [2024-10-13 14:28:23.021969] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021976] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021979] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.021985] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:19.423 [2024-10-13 14:28:23.021991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:19.423 [2024-10-13 14:28:23.021994] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.022001] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8db120): datao=0, datal=4096, cccid=7 00:32:19.423 [2024-10-13 14:28:23.022005] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9486c0) on tqpair(0x8db120): expected_datao=0, payload_size=4096 00:32:19.423 [2024-10-13 14:28:23.022009] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.022017] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.022020] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.022028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.423 [2024-10-13 14:28:23.022034] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.423 [2024-10-13 14:28:23.022038] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.022041] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9483c0) on tqpair=0x8db120 00:32:19.423 [2024-10-13 14:28:23.022054] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.423 [2024-10-13 14:28:23.022060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.423 [2024-10-13 14:28:23.026076] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.026081] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948240) on tqpair=0x8db120 00:32:19.423 [2024-10-13 14:28:23.026096] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.423 [2024-10-13 14:28:23.026102] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.423 [2024-10-13 14:28:23.026105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.026109] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x948540) on tqpair=0x8db120 00:32:19.423 [2024-10-13 14:28:23.026116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.423 [2024-10-13 14:28:23.026123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.423 [2024-10-13 14:28:23.026126] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.423 [2024-10-13 14:28:23.026130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9486c0) on tqpair=0x8db120 00:32:19.423 ===================================================== 00:32:19.423 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.423 ===================================================== 00:32:19.423 Controller Capabilities/Features 00:32:19.423 ================================ 00:32:19.423 Vendor ID: 8086 00:32:19.423 Subsystem Vendor ID: 8086 00:32:19.423 Serial Number: SPDK00000000000001 00:32:19.423 Model Number: SPDK bdev Controller 00:32:19.423 Firmware Version: 25.01 00:32:19.423 Recommended Arb Burst: 6 00:32:19.423 IEEE OUI Identifier: e4 d2 5c 00:32:19.423 Multi-path I/O 00:32:19.423 May have multiple subsystem ports: Yes 00:32:19.423 May have multiple controllers: Yes 00:32:19.423 Associated with SR-IOV VF: No 00:32:19.423 Max Data Transfer 
Size: 131072 00:32:19.423 Max Number of Namespaces: 32 00:32:19.423 Max Number of I/O Queues: 127 00:32:19.423 NVMe Specification Version (VS): 1.3 00:32:19.423 NVMe Specification Version (Identify): 1.3 00:32:19.423 Maximum Queue Entries: 128 00:32:19.423 Contiguous Queues Required: Yes 00:32:19.423 Arbitration Mechanisms Supported 00:32:19.423 Weighted Round Robin: Not Supported 00:32:19.423 Vendor Specific: Not Supported 00:32:19.423 Reset Timeout: 15000 ms 00:32:19.423 Doorbell Stride: 4 bytes 00:32:19.423 NVM Subsystem Reset: Not Supported 00:32:19.423 Command Sets Supported 00:32:19.423 NVM Command Set: Supported 00:32:19.423 Boot Partition: Not Supported 00:32:19.423 Memory Page Size Minimum: 4096 bytes 00:32:19.423 Memory Page Size Maximum: 4096 bytes 00:32:19.423 Persistent Memory Region: Not Supported 00:32:19.423 Optional Asynchronous Events Supported 00:32:19.423 Namespace Attribute Notices: Supported 00:32:19.423 Firmware Activation Notices: Not Supported 00:32:19.423 ANA Change Notices: Not Supported 00:32:19.423 PLE Aggregate Log Change Notices: Not Supported 00:32:19.423 LBA Status Info Alert Notices: Not Supported 00:32:19.423 EGE Aggregate Log Change Notices: Not Supported 00:32:19.423 Normal NVM Subsystem Shutdown event: Not Supported 00:32:19.423 Zone Descriptor Change Notices: Not Supported 00:32:19.423 Discovery Log Change Notices: Not Supported 00:32:19.423 Controller Attributes 00:32:19.423 128-bit Host Identifier: Supported 00:32:19.423 Non-Operational Permissive Mode: Not Supported 00:32:19.423 NVM Sets: Not Supported 00:32:19.423 Read Recovery Levels: Not Supported 00:32:19.423 Endurance Groups: Not Supported 00:32:19.423 Predictable Latency Mode: Not Supported 00:32:19.423 Traffic Based Keep ALive: Not Supported 00:32:19.423 Namespace Granularity: Not Supported 00:32:19.423 SQ Associations: Not Supported 00:32:19.423 UUID List: Not Supported 00:32:19.423 Multi-Domain Subsystem: Not Supported 00:32:19.423 Fixed Capacity Management: Not Supported 00:32:19.423 Variable Capacity Management: Not Supported 00:32:19.423 Delete Endurance Group: Not Supported 00:32:19.423 Delete NVM Set: Not Supported 00:32:19.423 Extended LBA Formats Supported: Not Supported 00:32:19.423 Flexible Data Placement Supported: Not Supported 00:32:19.423 00:32:19.423 Controller Memory Buffer Support 00:32:19.423 ================================ 00:32:19.423 Supported: No 00:32:19.423 00:32:19.423 Persistent Memory Region Support 00:32:19.423 ================================ 00:32:19.423 Supported: No 00:32:19.423 00:32:19.423 Admin Command Set Attributes 00:32:19.423 ============================ 00:32:19.423 Security Send/Receive: Not Supported 00:32:19.423 Format NVM: Not Supported 00:32:19.423 Firmware Activate/Download: Not Supported 00:32:19.423 Namespace Management: Not Supported 00:32:19.423 Device Self-Test: Not Supported 00:32:19.423 Directives: Not Supported 00:32:19.423 NVMe-MI: Not Supported 00:32:19.423 Virtualization Management: Not Supported 00:32:19.423 Doorbell Buffer Config: Not Supported 00:32:19.423 Get LBA Status Capability: Not Supported 00:32:19.423 Command & Feature Lockdown Capability: Not Supported 00:32:19.423 Abort Command Limit: 4 00:32:19.424 Async Event Request Limit: 4 00:32:19.424 Number of Firmware Slots: N/A 00:32:19.424 Firmware Slot 1 Read-Only: N/A 00:32:19.424 Firmware Activation Without Reset: N/A 00:32:19.424 Multiple Update Detection Support: N/A 00:32:19.424 Firmware Update Granularity: No Information Provided 00:32:19.424 Per-Namespace SMART Log: No 
00:32:19.424 Asymmetric Namespace Access Log Page: Not Supported 00:32:19.424 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:32:19.424 Command Effects Log Page: Supported 00:32:19.424 Get Log Page Extended Data: Supported 00:32:19.424 Telemetry Log Pages: Not Supported 00:32:19.424 Persistent Event Log Pages: Not Supported 00:32:19.424 Supported Log Pages Log Page: May Support 00:32:19.424 Commands Supported & Effects Log Page: Not Supported 00:32:19.424 Feature Identifiers & Effects Log Page:May Support 00:32:19.424 NVMe-MI Commands & Effects Log Page: May Support 00:32:19.424 Data Area 4 for Telemetry Log: Not Supported 00:32:19.424 Error Log Page Entries Supported: 128 00:32:19.424 Keep Alive: Supported 00:32:19.424 Keep Alive Granularity: 10000 ms 00:32:19.424 00:32:19.424 NVM Command Set Attributes 00:32:19.424 ========================== 00:32:19.424 Submission Queue Entry Size 00:32:19.424 Max: 64 00:32:19.424 Min: 64 00:32:19.424 Completion Queue Entry Size 00:32:19.424 Max: 16 00:32:19.424 Min: 16 00:32:19.424 Number of Namespaces: 32 00:32:19.424 Compare Command: Supported 00:32:19.424 Write Uncorrectable Command: Not Supported 00:32:19.424 Dataset Management Command: Supported 00:32:19.424 Write Zeroes Command: Supported 00:32:19.424 Set Features Save Field: Not Supported 00:32:19.424 Reservations: Supported 00:32:19.424 Timestamp: Not Supported 00:32:19.424 Copy: Supported 00:32:19.424 Volatile Write Cache: Present 00:32:19.424 Atomic Write Unit (Normal): 1 00:32:19.424 Atomic Write Unit (PFail): 1 00:32:19.424 Atomic Compare & Write Unit: 1 00:32:19.424 Fused Compare & Write: Supported 00:32:19.424 Scatter-Gather List 00:32:19.424 SGL Command Set: Supported 00:32:19.424 SGL Keyed: Supported 00:32:19.424 SGL Bit Bucket Descriptor: Not Supported 00:32:19.424 SGL Metadata Pointer: Not Supported 00:32:19.424 Oversized SGL: Not Supported 00:32:19.424 SGL Metadata Address: Not Supported 00:32:19.424 SGL Offset: Supported 00:32:19.424 Transport SGL Data Block: Not Supported 00:32:19.424 Replay Protected Memory Block: Not Supported 00:32:19.424 00:32:19.424 Firmware Slot Information 00:32:19.424 ========================= 00:32:19.424 Active slot: 1 00:32:19.424 Slot 1 Firmware Revision: 25.01 00:32:19.424 00:32:19.424 00:32:19.424 Commands Supported and Effects 00:32:19.424 ============================== 00:32:19.424 Admin Commands 00:32:19.424 -------------- 00:32:19.424 Get Log Page (02h): Supported 00:32:19.424 Identify (06h): Supported 00:32:19.424 Abort (08h): Supported 00:32:19.424 Set Features (09h): Supported 00:32:19.424 Get Features (0Ah): Supported 00:32:19.424 Asynchronous Event Request (0Ch): Supported 00:32:19.424 Keep Alive (18h): Supported 00:32:19.424 I/O Commands 00:32:19.424 ------------ 00:32:19.424 Flush (00h): Supported LBA-Change 00:32:19.424 Write (01h): Supported LBA-Change 00:32:19.424 Read (02h): Supported 00:32:19.424 Compare (05h): Supported 00:32:19.424 Write Zeroes (08h): Supported LBA-Change 00:32:19.424 Dataset Management (09h): Supported LBA-Change 00:32:19.424 Copy (19h): Supported LBA-Change 00:32:19.424 00:32:19.424 Error Log 00:32:19.424 ========= 00:32:19.424 00:32:19.424 Arbitration 00:32:19.424 =========== 00:32:19.424 Arbitration Burst: 1 00:32:19.424 00:32:19.424 Power Management 00:32:19.424 ================ 00:32:19.424 Number of Power States: 1 00:32:19.424 Current Power State: Power State #0 00:32:19.424 Power State #0: 00:32:19.424 Max Power: 0.00 W 00:32:19.424 Non-Operational State: Operational 00:32:19.424 Entry Latency: Not Reported 
00:32:19.424 Exit Latency: Not Reported 00:32:19.424 Relative Read Throughput: 0 00:32:19.424 Relative Read Latency: 0 00:32:19.424 Relative Write Throughput: 0 00:32:19.424 Relative Write Latency: 0 00:32:19.424 Idle Power: Not Reported 00:32:19.424 Active Power: Not Reported 00:32:19.424 Non-Operational Permissive Mode: Not Supported 00:32:19.424 00:32:19.424 Health Information 00:32:19.424 ================== 00:32:19.424 Critical Warnings: 00:32:19.424 Available Spare Space: OK 00:32:19.424 Temperature: OK 00:32:19.424 Device Reliability: OK 00:32:19.424 Read Only: No 00:32:19.424 Volatile Memory Backup: OK 00:32:19.424 Current Temperature: 0 Kelvin (-273 Celsius) 00:32:19.424 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:32:19.424 Available Spare: 0% 00:32:19.424 Available Spare Threshold: 0% 00:32:19.424 Life Percentage Used:[2024-10-13 14:28:23.026235] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026241] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8db120) 00:32:19.424 [2024-10-13 14:28:23.026248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.424 [2024-10-13 14:28:23.026262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9486c0, cid 7, qid 0 00:32:19.424 [2024-10-13 14:28:23.026480] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.424 [2024-10-13 14:28:23.026487] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.424 [2024-10-13 14:28:23.026490] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026494] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9486c0) on tqpair=0x8db120 00:32:19.424 [2024-10-13 14:28:23.026532] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:32:19.424 [2024-10-13 14:28:23.026543] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947c40) on tqpair=0x8db120 00:32:19.424 [2024-10-13 14:28:23.026549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.424 [2024-10-13 14:28:23.026555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947dc0) on tqpair=0x8db120 00:32:19.424 [2024-10-13 14:28:23.026560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.424 [2024-10-13 14:28:23.026565] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x947f40) on tqpair=0x8db120 00:32:19.424 [2024-10-13 14:28:23.026569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.424 [2024-10-13 14:28:23.026578] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.424 [2024-10-13 14:28:23.026583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:19.424 [2024-10-13 14:28:23.026593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.424 [2024-10-13 14:28:23.026613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.424 [2024-10-13 14:28:23.026626] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.424 [2024-10-13 14:28:23.026811] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.424 [2024-10-13 14:28:23.026817] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.424 [2024-10-13 14:28:23.026821] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026825] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.424 [2024-10-13 14:28:23.026832] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026836] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.424 [2024-10-13 14:28:23.026839] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.424 [2024-10-13 14:28:23.026846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.424 [2024-10-13 14:28:23.026859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.424 [2024-10-13 14:28:23.027094] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.424 [2024-10-13 14:28:23.027101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.027104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.027113] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:19.425 [2024-10-13 14:28:23.027118] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:19.425 [2024-10-13 14:28:23.027127] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.027141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.027152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.027365] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.027371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.027375] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027379] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.027389] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027393] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 
[2024-10-13 14:28:23.027396] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.027403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.027415] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.027595] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.027601] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.027605] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.027619] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027622] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.027633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.027643] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.027837] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.027844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.027847] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027851] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.027861] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027864] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.027868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.027875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.027885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.028081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.028088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.028091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.028105] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028109] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.028119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.028129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.028306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.028312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.028315] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028319] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.028329] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028333] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028337] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.028343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.028354] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.028522] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.028528] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.028532] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.028545] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028549] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.028560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.028570] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.028788] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.028794] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.028798] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028802] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.028812] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028815] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.028819] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.028826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.028836] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 
00:32:19.425 [2024-10-13 14:28:23.029019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.029025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.029029] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.029042] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029050] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.029056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.029074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.029251] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.029257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.029260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029264] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.029274] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.029288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.029298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.029484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.029491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.029497] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.029510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.029525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.029535] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.029752] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.029758] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:32:19.425 [2024-10-13 14:28:23.029761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029765] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.029775] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029783] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.425 [2024-10-13 14:28:23.029790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.425 [2024-10-13 14:28:23.029800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.425 [2024-10-13 14:28:23.029979] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.425 [2024-10-13 14:28:23.029985] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.425 [2024-10-13 14:28:23.029989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.425 [2024-10-13 14:28:23.029993] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.425 [2024-10-13 14:28:23.030002] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:19.426 [2024-10-13 14:28:23.030006] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:19.426 [2024-10-13 14:28:23.030010] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8db120) 00:32:19.426 [2024-10-13 14:28:23.030016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.426 [2024-10-13 14:28:23.030027] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9480c0, cid 3, qid 0 00:32:19.426 [2024-10-13 14:28:23.034073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:19.426 [2024-10-13 14:28:23.034081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:19.426 [2024-10-13 14:28:23.034084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:19.426 [2024-10-13 14:28:23.034088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9480c0) on tqpair=0x8db120 00:32:19.426 [2024-10-13 14:28:23.034096] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:32:19.426 0% 00:32:19.426 Data Units Read: 0 00:32:19.426 Data Units Written: 0 00:32:19.426 Host Read Commands: 0 00:32:19.426 Host Write Commands: 0 00:32:19.426 Controller Busy Time: 0 minutes 00:32:19.426 Power Cycles: 0 00:32:19.426 Power On Hours: 0 hours 00:32:19.426 Unsafe Shutdowns: 0 00:32:19.426 Unrecoverable Media Errors: 0 00:32:19.426 Lifetime Error Log Entries: 0 00:32:19.426 Warning Temperature Time: 0 minutes 00:32:19.426 Critical Temperature Time: 0 minutes 00:32:19.426 00:32:19.426 Number of Queues 00:32:19.426 ================ 00:32:19.426 Number of I/O Submission Queues: 127 00:32:19.426 Number of I/O Completion Queues: 127 00:32:19.426 00:32:19.426 Active Namespaces 00:32:19.426 ================= 00:32:19.426 Namespace ID:1 00:32:19.426 Error Recovery Timeout: Unlimited 00:32:19.426 Command Set Identifier: NVM (00h) 00:32:19.426 Deallocate: Supported 00:32:19.426 Deallocated/Unwritten 
Error: Not Supported 00:32:19.426 Deallocated Read Value: Unknown 00:32:19.426 Deallocate in Write Zeroes: Not Supported 00:32:19.426 Deallocated Guard Field: 0xFFFF 00:32:19.426 Flush: Supported 00:32:19.426 Reservation: Supported 00:32:19.426 Namespace Sharing Capabilities: Multiple Controllers 00:32:19.426 Size (in LBAs): 131072 (0GiB) 00:32:19.426 Capacity (in LBAs): 131072 (0GiB) 00:32:19.426 Utilization (in LBAs): 131072 (0GiB) 00:32:19.426 NGUID: ABCDEF0123456789ABCDEF0123456789 00:32:19.426 EUI64: ABCDEF0123456789 00:32:19.426 UUID: 18b45bcb-ad36-4095-8bcc-752a8992a3c0 00:32:19.426 Thin Provisioning: Not Supported 00:32:19.426 Per-NS Atomic Units: Yes 00:32:19.426 Atomic Boundary Size (Normal): 0 00:32:19.426 Atomic Boundary Size (PFail): 0 00:32:19.426 Atomic Boundary Offset: 0 00:32:19.426 Maximum Single Source Range Length: 65535 00:32:19.426 Maximum Copy Length: 65535 00:32:19.426 Maximum Source Range Count: 1 00:32:19.426 NGUID/EUI64 Never Reused: No 00:32:19.426 Namespace Write Protected: No 00:32:19.426 Number of LBA Formats: 1 00:32:19.426 Current LBA Format: LBA Format #00 00:32:19.426 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:19.426 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.426 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.426 rmmod nvme_tcp 00:32:19.426 rmmod nvme_fabrics 00:32:19.426 rmmod nvme_keyring 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1877818 ']' 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1877818 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1877818 ']' 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1877818 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:19.687 14:28:23 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1877818 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1877818' 00:32:19.687 killing process with pid 1877818 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1877818 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1877818 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.687 14:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.236 00:32:22.236 real 0m12.223s 00:32:22.236 user 0m9.595s 00:32:22.236 sys 0m6.348s 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:22.236 ************************************ 00:32:22.236 END TEST nvmf_identify 00:32:22.236 ************************************ 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.236 ************************************ 00:32:22.236 START TEST nvmf_perf 00:32:22.236 ************************************ 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:22.236 * Looking for test storage... 
00:32:22.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:32:22.236 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:22.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.237 --rc genhtml_branch_coverage=1 00:32:22.237 --rc genhtml_function_coverage=1 00:32:22.237 --rc genhtml_legend=1 00:32:22.237 --rc geninfo_all_blocks=1 00:32:22.237 --rc geninfo_unexecuted_blocks=1 00:32:22.237 00:32:22.237 ' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:22.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.237 --rc genhtml_branch_coverage=1 00:32:22.237 --rc genhtml_function_coverage=1 00:32:22.237 --rc genhtml_legend=1 00:32:22.237 --rc geninfo_all_blocks=1 00:32:22.237 --rc geninfo_unexecuted_blocks=1 00:32:22.237 00:32:22.237 ' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:22.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.237 --rc genhtml_branch_coverage=1 00:32:22.237 --rc genhtml_function_coverage=1 00:32:22.237 --rc genhtml_legend=1 00:32:22.237 --rc geninfo_all_blocks=1 00:32:22.237 --rc geninfo_unexecuted_blocks=1 00:32:22.237 00:32:22.237 ' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:22.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.237 --rc genhtml_branch_coverage=1 00:32:22.237 --rc genhtml_function_coverage=1 00:32:22.237 --rc genhtml_legend=1 00:32:22.237 --rc geninfo_all_blocks=1 00:32:22.237 --rc geninfo_unexecuted_blocks=1 00:32:22.237 00:32:22.237 ' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:22.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.237 14:28:25 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.237 14:28:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:30.384 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:30.384 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:30.384 Found net devices under 0000:31:00.0: cvl_0_0 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:30.384 14:28:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:30.384 Found net devices under 0000:31:00.1: cvl_0_1 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.384 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.385 14:28:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:32:30.385 00:32:30.385 --- 10.0.0.2 ping statistics --- 00:32:30.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.385 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:32:30.385 00:32:30.385 --- 10.0.0.1 ping statistics --- 00:32:30.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.385 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1882564 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1882564 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1882564 ']' 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:30.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.385 14:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:30.385 [2024-10-13 14:28:33.588368] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:32:30.385 [2024-10-13 14:28:33.588430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.385 [2024-10-13 14:28:33.730912] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:30.385 [2024-10-13 14:28:33.779979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:30.385 [2024-10-13 14:28:33.808130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.385 [2024-10-13 14:28:33.808171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.385 [2024-10-13 14:28:33.808179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.385 [2024-10-13 14:28:33.808186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.385 [2024-10-13 14:28:33.808192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.385 [2024-10-13 14:28:33.810143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.385 [2024-10-13 14:28:33.810301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.385 [2024-10-13 14:28:33.810452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.385 [2024-10-13 14:28:33.810452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:30.959 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:31.531 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:31.531 14:28:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:31.531 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:32:31.531 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.800 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:31.800 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:32:31.800 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:31.800 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:31.800 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:32.060 [2024-10-13 14:28:35.580979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.060 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.320 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:32.320 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.320 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:32.320 14:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:32.580 14:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.840 [2024-10-13 14:28:36.321959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.840 14:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.840 14:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:32:32.840 14:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:32.840 14:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:32.840 14:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:32:34.225 Initializing NVMe Controllers 00:32:34.225 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:32:34.225 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:32:34.225 Initialization complete. Launching workers. 
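Before these perf passes, the trace above stood up the TCP target with a short rpc.py sequence; note the first run here still talks to the drive directly over PCIe, while the fabric runs that follow go through the subsystem. A hedged recap of those calls exactly as traced, with the long /var/jenkins/... rpc.py path shortened:

    # Bring-up RPCs, as traced above (default /var/tmp/spdk.sock assumed)
    rpc=scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                   # -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                             # TCP transport init
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # NSID 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The table that follows is the local-PCIe baseline; the NVMe/TCP runs after it go through this subsystem at 10.0.0.2:4420.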
00:32:34.225 ======================================================== 00:32:34.225 Latency(us) 00:32:34.225 Device Information : IOPS MiB/s Average min max 00:32:34.225 PCIE (0000:65:00.0) NSID 1 from core 0: 78099.25 305.08 409.14 13.43 7358.16 00:32:34.225 ======================================================== 00:32:34.225 Total : 78099.25 305.08 409.14 13.43 7358.16 00:32:34.225 00:32:34.225 14:28:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:35.616 Initializing NVMe Controllers 00:32:35.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:35.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:35.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:35.616 Initialization complete. Launching workers. 00:32:35.616 ======================================================== 00:32:35.616 Latency(us) 00:32:35.616 Device Information : IOPS MiB/s Average min max 00:32:35.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.74 0.34 11565.88 241.77 45844.55 00:32:35.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.85 0.19 21121.40 7975.32 48001.80 00:32:35.616 ======================================================== 00:32:35.616 Total : 135.60 0.53 15008.67 241.77 48001.80 00:32:35.616 00:32:35.616 14:28:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:37.003 Initializing NVMe Controllers 00:32:37.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:37.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:37.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:37.003 Initialization complete. Launching workers. 00:32:37.003 ======================================================== 00:32:37.003 Latency(us) 00:32:37.003 Device Information : IOPS MiB/s Average min max 00:32:37.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12326.99 48.15 2597.53 328.16 8805.37 00:32:37.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3756.00 14.67 8718.00 4941.12 55995.16 00:32:37.003 ======================================================== 00:32:37.003 Total : 16082.98 62.82 4026.90 328.16 55995.16 00:32:37.003 00:32:37.003 14:28:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:37.003 14:28:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:37.003 14:28:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:40.304 Initializing NVMe Controllers 00:32:40.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:40.304 Controller IO queue size 128, less than required. 00:32:40.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
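The queue-size warning printed here (and repeated below for the second namespace) is submission math, not a failure: with -q 128 -o 262144 -O 16384, each 256 KiB request can fan out into sixteen 16 KiB submissions, far more than the 128-entry I/O queue can hold at once, so the surplus waits in the driver. A back-of-envelope check, assuming -O acts as the I/O unit size its usage string describes:

    # Worst-case outstanding submissions for the -q 128 -o 262144 -O 16384 run
    io_size=262144 unit=16384 qd=128 queue_entries=128
    splits=$(( io_size / unit ))          # 16 child commands per request
    echo "outstanding: $(( qd * splits )) vs $queue_entries SQ entries"
    # -> outstanding: 2048 vs 128 SQ entries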
00:32:40.304 Controller IO queue size 128, less than required. 00:32:40.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:40.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:40.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:40.304 Initialization complete. Launching workers. 00:32:40.304 ======================================================== 00:32:40.304 Latency(us) 00:32:40.304 Device Information : IOPS MiB/s Average min max 00:32:40.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2065.08 516.27 63051.73 33503.53 111830.56 00:32:40.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.20 152.30 219265.72 56283.23 331248.67 00:32:40.304 ======================================================== 00:32:40.304 Total : 2674.28 668.57 98637.04 33503.53 331248.67 00:32:40.304 00:32:40.304 14:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:40.304 No valid NVMe controllers or AIO or URING devices found 00:32:40.304 Initializing NVMe Controllers 00:32:40.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:40.304 Controller IO queue size 128, less than required. 00:32:40.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:40.304 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:40.304 Controller IO queue size 128, less than required. 00:32:40.304 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:40.304 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:40.304 WARNING: Some requested NVMe devices were skipped 00:32:40.304 14:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:42.851 Initializing NVMe Controllers 00:32:42.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.851 Controller IO queue size 128, less than required. 00:32:42.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:42.851 Controller IO queue size 128, less than required. 00:32:42.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:42.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:42.851 Initialization complete. Launching workers. 
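The --transport-stat pass launching here dumps per-queue TCP poller counters before the latency table. In the numbers that follow, polls minus idle_polls equals sock_completions on both queues (30203 - 20277 = 9926 and 29541 - 20830 = 8711), which suggests a poll counts as busy exactly when it reaps socket events. A small self-contained awk sketch for turning such a block into a busy ratio (field names copied from this log):

    awk -F': *' '
        $1 ~ /polls$/ && $1 !~ /idle/ { polls = $2 }
        $1 ~ /idle_polls$/            { idle  = $2 }
        END { printf "busy polls: %d (%.1f%%)\n", polls - idle,
                     100 * (polls - idle) / polls }
    ' <<'EOF'
    polls: 30203
    idle_polls: 20277
    EOF
    # -> busy polls: 9926 (32.9%)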
00:32:42.851 00:32:42.851 ==================== 00:32:42.851 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:42.851 TCP transport: 00:32:42.851 polls: 30203 00:32:42.851 idle_polls: 20277 00:32:42.851 sock_completions: 9926 00:32:42.851 nvme_completions: 8579 00:32:42.851 submitted_requests: 12834 00:32:42.851 queued_requests: 1 00:32:42.851 00:32:42.851 ==================== 00:32:42.851 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:42.851 TCP transport: 00:32:42.851 polls: 29541 00:32:42.851 idle_polls: 20830 00:32:42.851 sock_completions: 8711 00:32:42.851 nvme_completions: 8737 00:32:42.851 submitted_requests: 13172 00:32:42.851 queued_requests: 1 00:32:42.851 ======================================================== 00:32:42.851 Latency(us) 00:32:42.851 Device Information : IOPS MiB/s Average min max 00:32:42.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2141.95 535.49 60920.77 36612.53 103946.35 00:32:42.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2181.40 545.35 58960.37 29703.59 109857.41 00:32:42.851 ======================================================== 00:32:42.851 Total : 4323.36 1080.84 59931.62 29703.59 109857.41 00:32:42.851 00:32:42.851 14:28:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:42.851 14:28:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:43.113 14:28:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:43.113 14:28:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:43.113 14:28:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=39834f70-9156-47f5-8ea2-9ee1ecbe2722 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 39834f70-9156-47f5-8ea2-9ee1ecbe2722 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=39834f70-9156-47f5-8ea2-9ee1ecbe2722 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:44.500 { 00:32:44.500 "uuid": "39834f70-9156-47f5-8ea2-9ee1ecbe2722", 00:32:44.500 "name": "lvs_0", 00:32:44.500 "base_bdev": "Nvme0n1", 00:32:44.500 "total_data_clusters": 457407, 00:32:44.500 "free_clusters": 457407, 00:32:44.500 "block_size": 512, 00:32:44.500 "cluster_size": 4194304 00:32:44.500 } 00:32:44.500 ]' 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="39834f70-9156-47f5-8ea2-9ee1ecbe2722") .free_clusters' 00:32:44.500 14:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:32:44.500 14:28:47 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="39834f70-9156-47f5-8ea2-9ee1ecbe2722") .cluster_size' 00:32:44.500 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:44.500 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:32:44.500 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:32:44.500 1829628 00:32:44.500 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:44.500 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:44.500 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39834f70-9156-47f5-8ea2-9ee1ecbe2722 lbd_0 20480 00:32:44.761 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=fb922fc0-311b-4eaf-bd9b-abc970400f7a 00:32:44.761 14:28:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore fb922fc0-311b-4eaf-bd9b-abc970400f7a lvs_n_0 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=85c28ce6-b58b-4d51-b713-9dcb1f22edd4 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 85c28ce6-b58b-4d51-b713-9dcb1f22edd4 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=85c28ce6-b58b-4d51-b713-9dcb1f22edd4 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:32:46.146 14:28:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:46.406 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:46.406 { 00:32:46.406 "uuid": "39834f70-9156-47f5-8ea2-9ee1ecbe2722", 00:32:46.406 "name": "lvs_0", 00:32:46.406 "base_bdev": "Nvme0n1", 00:32:46.406 "total_data_clusters": 457407, 00:32:46.406 "free_clusters": 452287, 00:32:46.406 "block_size": 512, 00:32:46.406 "cluster_size": 4194304 00:32:46.406 }, 00:32:46.406 { 00:32:46.406 "uuid": "85c28ce6-b58b-4d51-b713-9dcb1f22edd4", 00:32:46.406 "name": "lvs_n_0", 00:32:46.406 "base_bdev": "fb922fc0-311b-4eaf-bd9b-abc970400f7a", 00:32:46.406 "total_data_clusters": 5114, 00:32:46.406 "free_clusters": 5114, 00:32:46.406 "block_size": 512, 00:32:46.406 "cluster_size": 4194304 00:32:46.406 } 00:32:46.406 ]' 00:32:46.406 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="85c28ce6-b58b-4d51-b713-9dcb1f22edd4") .free_clusters' 00:32:46.406 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:32:46.406 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="85c28ce6-b58b-4d51-b713-9dcb1f22edd4") .cluster_size' 00:32:46.406 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:46.407 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:32:46.407 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:32:46.407 20456 00:32:46.407 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:46.407 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85c28ce6-b58b-4d51-b713-9dcb1f22edd4 lbd_nest_0 20456 00:32:46.667 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b998b58c-2323-44ef-912a-1e076fab0ffa 00:32:46.667 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:46.928 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:46.928 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b998b58c-2323-44ef-912a-1e076fab0ffa 00:32:47.188 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.188 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:47.188 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:47.188 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:47.188 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:47.188 14:28:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:59.429 Initializing NVMe Controllers 00:32:59.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:59.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:59.429 Initialization complete. Launching workers. 00:32:59.429 ======================================================== 00:32:59.429 Latency(us) 00:32:59.429 Device Information : IOPS MiB/s Average min max 00:32:59.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.69 0.02 20621.81 214.29 49677.33 00:32:59.429 ======================================================== 00:32:59.429 Total : 48.69 0.02 20621.81 214.29 49677.33 00:32:59.429 00:32:59.429 14:29:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:59.429 14:29:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:09.428 Initializing NVMe Controllers 00:33:09.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:09.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:09.429 Initialization complete. Launching workers. 
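The lvol sizing a few lines up is plain cluster arithmetic: get_lvs_free_mb multiplies free_clusters by cluster_size and converts to MiB, so lvs_0 reports 457407 x 4 MiB = 1829628 MB (capped to 20480 for lbd_0), and the nested lvs_n_0 reports 5114 x 4 MiB = 20456 MB, the 24 MB gap below 20480 presumably being lvolstore metadata inside the backing lvol. A hedged sketch of that calculation, assuming jq and the bdev_lvol_get_lvstores output shape shown above (rpc.py path shortened):

    # get_lvs_free_mb, in essence: free_clusters * cluster_size, in MiB
    uuid=85c28ce6-b58b-4d51-b713-9dcb1f22edd4
    read -r fc cs < <(scripts/rpc.py bdev_lvol_get_lvstores |
        jq -r --arg u "$uuid" '.[] | select(.uuid == $u) |
                               "\(.free_clusters) \(.cluster_size)"')
    echo "free_mb=$(( fc * cs / 1024 / 1024 ))"   # 5114 * 4194304 / 2^20 = 20456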
00:33:09.429 ======================================================== 00:33:09.429 Latency(us) 00:33:09.429 Device Information : IOPS MiB/s Average min max 00:33:09.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 55.79 6.97 17967.28 6040.32 55999.22 00:33:09.429 ======================================================== 00:33:09.429 Total : 55.79 6.97 17967.28 6040.32 55999.22 00:33:09.429 00:33:09.429 14:29:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:09.429 14:29:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:09.429 14:29:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:19.431 Initializing NVMe Controllers 00:33:19.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:19.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:19.432 Initialization complete. Launching workers. 00:33:19.432 ======================================================== 00:33:19.432 Latency(us) 00:33:19.432 Device Information : IOPS MiB/s Average min max 00:33:19.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8779.00 4.29 3646.50 312.52 9986.09 00:33:19.432 ======================================================== 00:33:19.432 Total : 8779.00 4.29 3646.50 312.52 9986.09 00:33:19.432 00:33:19.432 14:29:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:19.432 14:29:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:29.428 Initializing NVMe Controllers 00:33:29.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:29.428 Initialization complete. Launching workers. 00:33:29.428 ======================================================== 00:33:29.428 Latency(us) 00:33:29.428 Device Information : IOPS MiB/s Average min max 00:33:29.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3838.80 479.85 8340.40 728.00 23670.10 00:33:29.428 ======================================================== 00:33:29.428 Total : 3838.80 479.85 8340.40 728.00 23670.10 00:33:29.428 00:33:29.428 14:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:29.428 14:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:29.428 14:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:39.536 Initializing NVMe Controllers 00:33:39.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:39.536 Controller IO queue size 128, less than required. 00:33:39.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
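Two identities tie the sweep numbers above together: throughput is MiB/s = IOPS x io_size / 2^20 (8779.00 x 512 B = 4.29 MiB/s; 3838.80 x 128 KiB = 479.85 MiB/s), and by Little's law the product IOPS x average latency recovers the queue depth (48.69 x 20621.81 us = 1.00 for the -q 1 pass). A quick awk check of both, with the values copied from the tables above:

    awk 'BEGIN {
        # MiB/s = IOPS * io_size / 2^20
        printf "q32/512B : %.2f MiB/s\n", 8779.00 * 512    / 1048576   # table: 4.29
        printf "q32/128K : %.2f MiB/s\n", 3838.80 * 131072 / 1048576   # table: 479.85
        # in-flight = IOPS * avg latency in seconds (Little)
        printf "q1 concurrency: %.2f\n", 48.69 * 20621.81e-6           # ~= depth 1
    }'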
00:33:39.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:39.536 Initialization complete. Launching workers. 00:33:39.536 ======================================================== 00:33:39.536 Latency(us) 00:33:39.536 Device Information : IOPS MiB/s Average min max 00:33:39.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15771.22 7.70 8121.30 1441.65 22585.43 00:33:39.536 ======================================================== 00:33:39.536 Total : 15771.22 7.70 8121.30 1441.65 22585.43 00:33:39.536 00:33:39.536 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:39.536 14:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:51.762 Initializing NVMe Controllers 00:33:51.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:51.762 Controller IO queue size 128, less than required. 00:33:51.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:51.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:51.762 Initialization complete. Launching workers. 00:33:51.762 ======================================================== 00:33:51.762 Latency(us) 00:33:51.762 Device Information : IOPS MiB/s Average min max 00:33:51.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.15 149.39 107676.77 15878.70 241486.65 00:33:51.762 ======================================================== 00:33:51.762 Total : 1195.15 149.39 107676.77 15878.70 241486.65 00:33:51.762 00:33:51.762 14:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:51.762 14:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b998b58c-2323-44ef-912a-1e076fab0ffa 00:33:51.762 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:51.762 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb922fc0-311b-4eaf-bd9b-abc970400f7a 00:33:51.762 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:52.024 rmmod nvme_tcp 
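The teardown above unwinds creation in strict reverse so no lvol is orphaned from its store: subsystem first, then the nested lvol, its store, the backing lvol, and finally lvs_0. A recap of the sequence exactly as traced (rpc.py path shortened):

    rpc=scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $rpc bdev_lvol_delete b998b58c-2323-44ef-912a-1e076fab0ffa   # nested lvol
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0                     # nested store
    $rpc bdev_lvol_delete fb922fc0-311b-4eaf-bd9b-abc970400f7a   # backing lvol
    $rpc bdev_lvol_delete_lvstore -l lvs_0                       # store on Nvme0n1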
00:33:52.024 rmmod nvme_fabrics 00:33:52.024 rmmod nvme_keyring 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1882564 ']' 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1882564 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1882564 ']' 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1882564 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:52.024 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1882564 00:33:52.286 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:52.286 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:52.286 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1882564' 00:33:52.286 killing process with pid 1882564 00:33:52.286 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1882564 00:33:52.286 14:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1882564 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.200 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.201 14:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.116 14:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.116 00:33:56.116 real 1m34.246s 00:33:56.116 user 5m32.062s 00:33:56.116 sys 0m15.905s 00:33:56.116 14:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:56.116 14:29:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:56.116 ************************************ 00:33:56.116 END TEST nvmf_perf 00:33:56.116 ************************************ 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.379 ************************************ 00:33:56.379 START TEST nvmf_fio_host 00:33:56.379 ************************************ 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:56.379 * Looking for test storage... 00:33:56.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:56.379 14:29:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:56.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.379 --rc genhtml_branch_coverage=1 00:33:56.379 --rc genhtml_function_coverage=1 00:33:56.379 --rc genhtml_legend=1 00:33:56.379 --rc geninfo_all_blocks=1 00:33:56.379 --rc geninfo_unexecuted_blocks=1 00:33:56.379 00:33:56.379 ' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:56.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.379 --rc genhtml_branch_coverage=1 00:33:56.379 --rc genhtml_function_coverage=1 00:33:56.379 --rc genhtml_legend=1 00:33:56.379 --rc geninfo_all_blocks=1 00:33:56.379 --rc geninfo_unexecuted_blocks=1 00:33:56.379 00:33:56.379 ' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:56.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.379 --rc genhtml_branch_coverage=1 00:33:56.379 --rc genhtml_function_coverage=1 00:33:56.379 --rc genhtml_legend=1 00:33:56.379 --rc geninfo_all_blocks=1 00:33:56.379 --rc geninfo_unexecuted_blocks=1 00:33:56.379 00:33:56.379 ' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:56.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.379 --rc genhtml_branch_coverage=1 00:33:56.379 --rc genhtml_function_coverage=1 00:33:56.379 --rc genhtml_legend=1 00:33:56.379 --rc geninfo_all_blocks=1 00:33:56.379 --rc geninfo_unexecuted_blocks=1 00:33:56.379 00:33:56.379 ' 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.379 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.642 14:30:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.642 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:56.643 
14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.643 14:30:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.788 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:04.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:04.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:04.789 Found net devices under 0000:31:00.0: cvl_0_0 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:04.789 Found net devices under 0000:31:00.1: cvl_0_1 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:34:04.789 00:34:04.789 --- 10.0.0.2 ping statistics --- 00:34:04.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.789 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
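[editor's note] The nvmftestinit sequence traced above builds a two-namespace loopback out of the two cvl ports: cvl_0_0 becomes the target NIC inside a private network namespace, cvl_0_1 stays in the root namespace as the initiator, and an iptables exception opens the NVMe/TCP port between them. Stripped of the xtrace noise, the setup reduces to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings (10.0.0.2 from the root namespace above, 10.0.0.1 from inside the namespace just below) confirm the path works in both directions before any NVMe traffic is attempted.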
00:34:04.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:34:04.789 00:34:04.789 --- 10.0.0.1 ping statistics --- 00:34:04.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.789 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:04.789 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1902613 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1902613 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1902613 ']' 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:04.790 14:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.790 [2024-10-13 14:30:07.933965] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:34:04.790 [2024-10-13 14:30:07.934030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.790 [2024-10-13 14:30:08.075334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
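[editor's note] With the fabric verified, fio.sh starts the target inside the namespace so that it binds the 10.0.0.2 side; this is the backgrounded invocation logged above, after which the DPDK/EAL startup notices interleave with the test's own output:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten then polls the RPC socket /var/tmp/spdk.sock until the app answers

Here -i 0 sets the shared-memory instance id, -e 0xFFFF enables all tracepoint groups (hence the "Tracepoint Group Mask 0xFFFF specified" notice), and -m 0xF pins the app to four cores, matching the four "Reactor started" lines that follow.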
There is no support for it in SPDK. Enabled only for validation. 00:34:04.790 [2024-10-13 14:30:08.109420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:04.790 [2024-10-13 14:30:08.138007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.790 [2024-10-13 14:30:08.138049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.790 [2024-10-13 14:30:08.138058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.790 [2024-10-13 14:30:08.138073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.790 [2024-10-13 14:30:08.138079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.790 [2024-10-13 14:30:08.140167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.790 [2024-10-13 14:30:08.140287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:04.790 [2024-10-13 14:30:08.140519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.790 [2024-10-13 14:30:08.140519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:05.364 14:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:05.364 14:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:34:05.364 14:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:05.364 [2024-10-13 14:30:08.932163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.364 14:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:05.364 14:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:05.364 14:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.364 14:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:05.626 Malloc1 00:34:05.626 14:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:05.888 14:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:06.149 14:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.149 [2024-10-13 14:30:09.813545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.149 14:30:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:06.411 14:30:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:07.009 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:07.009 fio-3.35 00:34:07.009 Starting 1 thread 00:34:09.594 00:34:09.594 test: (groupid=0, jobs=1): err= 0: pid=1903808: Sun Oct 13 14:30:12 2024 00:34:09.594 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec) 00:34:09.594 slat (usec): min=2, max=279, avg= 
2.15, stdev= 2.36 00:34:09.594 clat (usec): min=3351, max=8939, avg=5112.05, stdev=375.80 00:34:09.594 lat (usec): min=3354, max=8941, avg=5114.21, stdev=375.95 00:34:09.594 clat percentiles (usec): 00:34:09.594 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:34:09.594 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:34:09.594 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:34:09.594 | 99.00th=[ 5932], 99.50th=[ 6390], 99.90th=[ 7898], 99.95th=[ 8160], 00:34:09.594 | 99.99th=[ 8848] 00:34:09.594 bw ( KiB/s): min=53912, max=55640, per=99.99%, avg=55156.00, stdev=833.88, samples=4 00:34:09.594 iops : min=13478, max=13910, avg=13789.00, stdev=208.47, samples=4 00:34:09.594 write: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2005msec); 0 zone resets 00:34:09.594 slat (usec): min=2, max=267, avg= 2.22, stdev= 1.78 00:34:09.594 clat (usec): min=2772, max=8146, avg=4123.69, stdev=317.66 00:34:09.594 lat (usec): min=2774, max=8149, avg=4125.91, stdev=317.86 00:34:09.594 clat percentiles (usec): 00:34:09.594 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:34:09.594 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:34:09.594 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:34:09.594 | 99.00th=[ 4817], 99.50th=[ 5407], 99.90th=[ 6652], 99.95th=[ 7242], 00:34:09.595 | 99.99th=[ 8029] 00:34:09.595 bw ( KiB/s): min=54216, max=55632, per=100.00%, avg=55122.00, stdev=622.51, samples=4 00:34:09.595 iops : min=13554, max=13908, avg=13780.50, stdev=155.63, samples=4 00:34:09.595 lat (msec) : 4=16.67%, 10=83.33% 00:34:09.595 cpu : usr=75.15%, sys=23.65%, ctx=35, majf=0, minf=19 00:34:09.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:09.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:09.595 issued rwts: total=27649,27617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:09.595 00:34:09.595 Run status group 0 (all jobs): 00:34:09.595 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:34:09.595 WRITE: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:34:09.595 14:30:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:09.595 14:30:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:09.595 14:30:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:09.595 14:30:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
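[editor's note] Both fio jobs in this test go through the fio_nvme wrapper, whose xtrace fills the surrounding lines: it first probes the plugin's linkage for sanitizer libraries (the ldd | grep libasan / libclang_rt.asan steps, both empty on this builder), then LD_PRELOADs SPDK's external ioengine and encodes the fabric target into fio's --filename. The whole dance reduces to roughly:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second job, whose setup continues below, swaps in mock_sgl_config.fio (16 KiB blocks, as its banner shows), presumably to exercise the plugin's scatter-gather list handling rather than single 4 KiB buffers.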
00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:09.595 14:30:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:09.854 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:09.854 fio-3.35 00:34:09.854 Starting 1 thread 00:34:12.384 00:34:12.384 test: (groupid=0, jobs=1): err= 0: pid=1904602: Sun Oct 13 14:30:15 2024 00:34:12.384 read: IOPS=9913, BW=155MiB/s (162MB/s)(311MiB/2005msec) 00:34:12.384 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.56 00:34:12.384 clat (usec): min=1739, max=14544, avg=7714.74, stdev=1848.63 00:34:12.384 lat (usec): min=1743, max=14561, avg=7718.34, stdev=1848.80 00:34:12.384 clat percentiles (usec): 00:34:12.384 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 6063], 00:34:12.384 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 7570], 60.00th=[ 8094], 00:34:12.384 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10159], 95.00th=[10552], 00:34:12.384 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14091], 99.95th=[14222], 00:34:12.384 | 99.99th=[14484] 00:34:12.384 bw ( KiB/s): min=71136, max=87712, per=50.21%, avg=79640.00, stdev=6809.73, samples=4 00:34:12.384 iops : min= 4446, max= 5482, avg=4977.50, stdev=425.61, samples=4 00:34:12.384 write: IOPS=5973, BW=93.3MiB/s (97.9MB/s)(163MiB/1746msec); 0 zone resets 00:34:12.384 slat (usec): min=39, max=456, avg=40.99, stdev= 8.19 00:34:12.384 clat (usec): min=2803, max=16493, avg=8840.26, stdev=1361.01 00:34:12.384 lat (usec): min=2843, max=16625, avg=8881.25, stdev=1363.39 
00:34:12.384 clat percentiles (usec): 00:34:12.384 | 1.00th=[ 5932], 5.00th=[ 6849], 10.00th=[ 7242], 20.00th=[ 7701], 00:34:12.384 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:34:12.384 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10945], 00:34:12.384 | 99.00th=[12256], 99.50th=[13173], 99.90th=[15926], 99.95th=[16319], 00:34:12.384 | 99.99th=[16450] 00:34:12.384 bw ( KiB/s): min=75840, max=90976, per=86.96%, avg=83112.00, stdev=6365.37, samples=4 00:34:12.384 iops : min= 4740, max= 5686, avg=5194.50, stdev=397.84, samples=4 00:34:12.384 lat (msec) : 2=0.01%, 4=0.63%, 10=83.89%, 20=15.47% 00:34:12.384 cpu : usr=90.62%, sys=8.53%, ctx=17, majf=0, minf=41 00:34:12.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:34:12.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:12.384 issued rwts: total=19877,10430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:12.384 00:34:12.384 Run status group 0 (all jobs): 00:34:12.384 READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=311MiB (326MB), run=2005-2005msec 00:34:12.384 WRITE: bw=93.3MiB/s (97.9MB/s), 93.3MiB/s-93.3MiB/s (97.9MB/s-97.9MB/s), io=163MiB (171MB), run=1746-1746msec 00:34:12.384 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.384 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:12.384 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:12.384 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:12.384 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:34:12.384 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:34:12.385 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:12.385 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:12.385 14:30:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:34:12.385 14:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:34:12.385 14:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:34:12.385 14:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:34:12.953 Nvme0n1 00:34:12.953 14:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9370357b-1771-48e2-82e8-4ee851677231 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9370357b-1771-48e2-82e8-4ee851677231 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1364 -- # local lvs_uuid=9370357b-1771-48e2-82e8-4ee851677231 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:13.521 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:13.779 { 00:34:13.779 "uuid": "9370357b-1771-48e2-82e8-4ee851677231", 00:34:13.779 "name": "lvs_0", 00:34:13.779 "base_bdev": "Nvme0n1", 00:34:13.779 "total_data_clusters": 1787, 00:34:13.779 "free_clusters": 1787, 00:34:13.779 "block_size": 512, 00:34:13.779 "cluster_size": 1073741824 00:34:13.779 } 00:34:13.779 ]' 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9370357b-1771-48e2-82e8-4ee851677231") .free_clusters' 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9370357b-1771-48e2-82e8-4ee851677231") .cluster_size' 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:34:13.779 1829888 00:34:13.779 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:34:14.038 ed55d9d0-1706-4e70-96f5-69eb94c544d3 00:34:14.038 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:14.298 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:14.298 14:30:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.557 14:30:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:14.557 14:30:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:15.122 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:15.122 fio-3.35 00:34:15.122 Starting 1 thread 00:34:17.651 00:34:17.651 test: (groupid=0, jobs=1): err= 0: pid=1905753: Sun Oct 13 14:30:21 2024 00:34:17.651 read: IOPS=10.3k, BW=40.2MiB/s (42.1MB/s)(80.6MiB/2006msec) 00:34:17.651 slat (usec): min=2, max=111, avg= 2.21, stdev= 1.09 00:34:17.651 clat (usec): min=1915, max=11454, avg=6858.27, stdev=510.47 00:34:17.651 lat (usec): min=1933, max=11456, avg=6860.48, stdev=510.42 00:34:17.651 clat percentiles (usec): 00:34:17.651 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:34:17.651 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:34:17.651 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7635], 00:34:17.651 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[ 9110], 99.95th=[10552], 00:34:17.651 | 99.99th=[11338] 00:34:17.651 bw ( KiB/s): min=40064, max=41824, per=100.00%, avg=41158.00, stdev=760.00, samples=4 00:34:17.651 iops : min=10016, max=10456, avg=10289.50, stdev=190.00, samples=4 00:34:17.651 write: IOPS=10.3k, 
BW=40.2MiB/s (42.2MB/s)(80.7MiB/2006msec); 0 zone resets 00:34:17.651 slat (nsec): min=2082, max=111985, avg=2286.28, stdev=818.34 00:34:17.651 clat (usec): min=1041, max=10710, avg=5486.78, stdev=451.06 00:34:17.651 lat (usec): min=1048, max=10712, avg=5489.06, stdev=451.04 00:34:17.651 clat percentiles (usec): 00:34:17.651 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:34:17.651 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5473], 60.00th=[ 5604], 00:34:17.651 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6128], 00:34:17.651 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 8979], 99.95th=[ 9765], 00:34:17.651 | 99.99th=[10683] 00:34:17.651 bw ( KiB/s): min=40616, max=41600, per=99.98%, avg=41204.00, stdev=459.12, samples=4 00:34:17.651 iops : min=10154, max=10400, avg=10301.00, stdev=114.78, samples=4 00:34:17.651 lat (msec) : 2=0.02%, 4=0.11%, 10=99.82%, 20=0.05% 00:34:17.651 cpu : usr=70.02%, sys=28.98%, ctx=42, majf=0, minf=28 00:34:17.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:17.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:17.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:17.651 issued rwts: total=20640,20668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:17.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:17.651 00:34:17.651 Run status group 0 (all jobs): 00:34:17.651 READ: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=80.6MiB (84.5MB), run=2006-2006msec 00:34:17.651 WRITE: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=80.7MiB (84.7MB), run=2006-2006msec 00:34:17.651 14:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:17.651 14:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=50e973c9-3943-42df-979d-a7f428ddfa93 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 50e973c9-3943-42df-979d-a7f428ddfa93 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=50e973c9-3943-42df-979d-a7f428ddfa93 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:18.585 { 00:34:18.585 "uuid": "9370357b-1771-48e2-82e8-4ee851677231", 00:34:18.585 "name": "lvs_0", 00:34:18.585 "base_bdev": "Nvme0n1", 00:34:18.585 "total_data_clusters": 1787, 00:34:18.585 "free_clusters": 0, 00:34:18.585 "block_size": 512, 00:34:18.585 "cluster_size": 1073741824 00:34:18.585 }, 00:34:18.585 { 00:34:18.585 "uuid": "50e973c9-3943-42df-979d-a7f428ddfa93", 00:34:18.585 "name": "lvs_n_0", 00:34:18.585 "base_bdev": 
"ed55d9d0-1706-4e70-96f5-69eb94c544d3", 00:34:18.585 "total_data_clusters": 457025, 00:34:18.585 "free_clusters": 457025, 00:34:18.585 "block_size": 512, 00:34:18.585 "cluster_size": 4194304 00:34:18.585 } 00:34:18.585 ]' 00:34:18.585 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="50e973c9-3943-42df-979d-a7f428ddfa93") .free_clusters' 00:34:18.843 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:34:18.843 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="50e973c9-3943-42df-979d-a7f428ddfa93") .cluster_size' 00:34:18.843 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:18.843 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:34:18.843 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:34:18.843 1828100 00:34:18.843 14:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:34:19.777 251a0949-32fa-482c-acff-7d5f041e6ffe 00:34:19.777 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:19.777 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:20.035 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:20.327 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:20.327 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:20.327 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:20.327 14:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:20.588 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:20.588 fio-3.35 00:34:20.588 Starting 1 thread 00:34:23.126 00:34:23.126 test: (groupid=0, jobs=1): err= 0: pid=1907007: Sun Oct 13 14:30:26 2024 00:34:23.126 read: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(71.9MiB/2006msec) 00:34:23.126 slat (usec): min=2, max=110, avg= 2.23, stdev= 1.15 00:34:23.126 clat (usec): min=2111, max=12668, avg=7718.96, stdev=604.55 00:34:23.126 lat (usec): min=2129, max=12671, avg=7721.19, stdev=604.49 00:34:23.126 clat percentiles (usec): 00:34:23.126 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:34:23.126 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7898], 00:34:23.126 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8586], 00:34:23.126 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11076], 99.95th=[12125], 00:34:23.126 | 99.99th=[12649] 00:34:23.126 bw ( KiB/s): min=35792, max=37224, per=99.91%, avg=36652.00, stdev=608.02, samples=4 00:34:23.126 iops : min= 8948, max= 9306, avg=9163.00, stdev=152.00, samples=4 00:34:23.126 write: IOPS=9181, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec); 0 zone resets 00:34:23.126 slat (usec): min=2, max=136, avg= 2.30, stdev= 1.04 00:34:23.126 clat (usec): min=1077, max=11092, avg=6161.15, stdev=514.65 00:34:23.126 lat (usec): min=1084, max=11095, avg=6163.45, stdev=514.63 00:34:23.126 clat percentiles (usec): 00:34:23.126 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:34:23.126 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:34:23.126 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:34:23.126 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9110], 99.95th=[10159], 00:34:23.126 | 99.99th=[11076] 00:34:23.126 bw ( KiB/s): min=36608, max=36864, per=100.00%, avg=36724.00, stdev=127.58, samples=4 00:34:23.126 
iops : min= 9152, max= 9216, avg=9181.00, stdev=31.90, samples=4 00:34:23.126 lat (msec) : 2=0.01%, 4=0.10%, 10=99.77%, 20=0.12% 00:34:23.126 cpu : usr=72.67%, sys=26.38%, ctx=61, majf=0, minf=28 00:34:23.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:23.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:23.126 issued rwts: total=18397,18418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:23.126 00:34:23.126 Run status group 0 (all jobs): 00:34:23.126 READ: bw=35.8MiB/s (37.6MB/s), 35.8MiB/s-35.8MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:34:23.126 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:34:23.126 14:30:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:23.385 14:30:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:23.385 14:30:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:34:25.289 14:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:25.289 14:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:34:25.860 14:30:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:26.120 14:30:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:28.030 rmmod nvme_tcp 00:34:28.030 rmmod nvme_fabrics 00:34:28.030 rmmod nvme_keyring 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1902613 ']' 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1902613 
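As a consistency check on the run summaries above, bandwidth is just IOPS times block size: this run's ~9181 write IOPS at bs=4096 gives 9181 * 4096 = 37605376 B/s, about 37.6 MB/s, which matches the reported 35.9 MiB/s (and likewise ~10300 IOPS in the earlier run works out to about 42.2 MB/s). A trivial sketch of the same arithmetic:
    # fio reports BW and IOPS independently; they should agree to rounding.
    iops=9181; bs=4096
    echo "$(( iops * bs )) B/s"   # 37605376 B/s ~= 37.6 MB/s = 35.9 MiB/s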
00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1902613 ']' 00:34:28.030 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1902613 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1902613 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1902613' 00:34:28.291 killing process with pid 1902613 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1902613 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1902613 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.291 14:30:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.836 14:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.836 00:34:30.836 real 0m34.114s 00:34:30.836 user 2m42.503s 00:34:30.836 sys 0m10.308s 00:34:30.836 14:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:30.836 14:30:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.836 ************************************ 00:34:30.836 END TEST nvmf_fio_host 00:34:30.836 ************************************ 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.836 ************************************ 00:34:30.836 START TEST 
nvmf_failover 00:34:30.836 ************************************ 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:34:30.836 * Looking for test storage... 00:34:30.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.836 --rc genhtml_branch_coverage=1 00:34:30.836 --rc genhtml_function_coverage=1 00:34:30.836 --rc genhtml_legend=1 00:34:30.836 --rc geninfo_all_blocks=1 00:34:30.836 --rc geninfo_unexecuted_blocks=1 00:34:30.836 00:34:30.836 ' 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:30.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.836 --rc genhtml_branch_coverage=1 00:34:30.836 --rc genhtml_function_coverage=1 00:34:30.836 --rc genhtml_legend=1 00:34:30.836 --rc geninfo_all_blocks=1 00:34:30.836 --rc geninfo_unexecuted_blocks=1 00:34:30.836 00:34:30.836 ' 00:34:30.836 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:30.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.837 --rc genhtml_branch_coverage=1 00:34:30.837 --rc genhtml_function_coverage=1 00:34:30.837 --rc genhtml_legend=1 00:34:30.837 --rc geninfo_all_blocks=1 00:34:30.837 --rc geninfo_unexecuted_blocks=1 00:34:30.837 00:34:30.837 ' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:30.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.837 --rc genhtml_branch_coverage=1 00:34:30.837 --rc genhtml_function_coverage=1 00:34:30.837 --rc genhtml_legend=1 00:34:30.837 --rc geninfo_all_blocks=1 00:34:30.837 --rc geninfo_unexecuted_blocks=1 00:34:30.837 00:34:30.837 ' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:30.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
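The constants just defined drive the target side of the failover test: a 64 MiB malloc bdev (MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE) exported through one subsystem with listeners on all three ports. A condensed sketch of the RPCs that appear later in this trace, where rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:
    # Target-side setup for the failover exercise.
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do   # NVMF_PORT, NVMF_SECOND_PORT, NVMF_THIRD_PORT
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done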
00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.837 14:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:38.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:38.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.984 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:38.984 Found net devices under 0000:31:00.0: cvl_0_0 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:38.985 Found net devices under 0000:31:00.1: cvl_0_1 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:38.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:38.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:34:38.985 00:34:38.985 --- 10.0.0.2 ping statistics --- 00:34:38.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.985 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:38.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:38.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:34:38.985 00:34:38.985 --- 10.0.0.1 ping statistics --- 00:34:38.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.985 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1912506 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1912506 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1912506 ']' 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.985 14:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:38.985 [2024-10-13 14:30:42.031202] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:34:38.985 [2024-10-13 14:30:42.031266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.985 [2024-10-13 14:30:42.173363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
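For reference, the data path verified by the pings above is built by moving the target NIC into a private network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) talk over a real link. Condensed from the nvmf_tcp_init commands in this trace:
    # Namespace split used by nvmf_tcp_init.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator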
00:34:38.985 [2024-10-13 14:30:42.221116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:38.985 [2024-10-13 14:30:42.248354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.985 [2024-10-13 14:30:42.248398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.985 [2024-10-13 14:30:42.248407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.985 [2024-10-13 14:30:42.248414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.985 [2024-10-13 14:30:42.248421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:38.985 [2024-10-13 14:30:42.250396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:38.985 [2024-10-13 14:30:42.250620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:38.985 [2024-10-13 14:30:42.250621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.247 14:30:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:39.508 [2024-10-13 14:30:43.067224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.508 14:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:39.769 Malloc0 00:34:39.769 14:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:40.031 14:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:40.032 14:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.293 [2024-10-13 14:30:43.879595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.293 14:30:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:40.611 [2024-10-13 14:30:44.079758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:40.611 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:40.611 [2024-10-13 14:30:44.280025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1913090 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1913090 /var/tmp/bdevperf.sock 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1913090 ']' 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:40.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:40.964 14:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:41.554 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:41.554 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:41.554 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:41.815 NVMe0n1 00:34:42.076 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:42.338 00:34:42.338 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1913306 00:34:42.338 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:42.338 14:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:43.280 14:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.541 [2024-10-13 14:30:46.994689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252d900 is same with the state(6) to be set 00:34:43.541 [2024-10-13 14:30:46.994722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
00:34:43.280 14:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:43.541 [2024-10-13 14:30:46.994689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252d900 is same with the state(6) to be set
00:34:43.541 [previous message repeated 26 more times for tqpair=0x252d900, through 14:30:46.994844, while the port 4420 qpair was torn down; repeats elided]
00:34:43.541 14:30:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:34:46.845 14:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:46.845
00:34:46.845 14:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:47.107 [2024-10-13 14:30:50.612370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252e5b0 is same with the state(6) to be set
00:34:47.108 [previous message repeated continuously for tqpair=0x252e5b0 through 14:30:50.612994, roughly 130 further occurrences, while the port 4421 qpair was torn down; repeats elided]
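The elided repeats are easy to tally from a raw capture of the target's console output; a generic sketch (the capture file name is hypothetical, this harness does not produce it under that name):

    # count the repeated recv-state messages per queue pair
    grep -o 'tqpair=0x[0-9a-f]* is same with the state(6)' target-console.log | sort | uniq -c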
00:34:47.109 14:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:34:50.408 14:30:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:50.408 [2024-10-13 14:30:53.802612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:50.408 14:30:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:34:51.352 14:30:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:51.352 [2024-10-13 14:30:54.994933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252f360 is same with the state(6) to be set
00:34:51.352 [previous message repeated for tqpair=0x252f360 through 14:30:54.995237, on the order of 55 further occurrences, while the port 4422 qpair was torn down; repeats elided]
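Taken together, steps @43 through @57 rotate the active path twice and then fail back: each nvmf_subsystem_remove_listener drops the connected qpair (producing one burst of the recv-state messages above) and bdevperf's failover policy moves I/O to the next attached path. The rotation, condensed under the same assumptions as the earlier sketch:

    rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # fail over 4420 -> 4421
    sleep 3
    rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover
    rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # fail over 4421 -> 4422
    sleep 3
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # restore the primary listener
    sleep 1
    rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # final failover back to 4420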
00:34:51.353 14:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1913306
00:34:57.940 {
00:34:57.940   "results": [
00:34:57.940     {
00:34:57.940       "job": "NVMe0n1",
00:34:57.940       "core_mask": "0x1",
00:34:57.940       "workload": "verify",
00:34:57.940       "status": "finished",
00:34:57.940       "verify_range": {
00:34:57.940         "start": 0,
00:34:57.940         "length": 16384
00:34:57.940       },
00:34:57.940       "queue_depth": 128,
00:34:57.940       "io_size": 4096,
00:34:57.940       "runtime": 15.003639,
00:34:57.940       "iops": 12368.39942629918,
00:34:57.940       "mibps": 48.31406025898117,
00:34:57.940       "io_failed": 13077,
00:34:57.940       "io_timeout": 0,
00:34:57.940       "avg_latency_us": 9645.773911158309,
00:34:57.940       "min_latency_us": 543.9893083862346,
00:34:57.940       "max_latency_us": 21458.496491814232
00:34:57.940     }
00:34:57.940   ],
00:34:57.940   "core_count": 1
00:34:57.940 }
00:34:57.940 14:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1913090
00:34:57.940 14:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1913090 ']'
00:34:57.940 14:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1913090
00:34:57.940 14:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:34:57.940 14:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:57.940 14:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1913090
00:34:57.940 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:57.940 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:57.940 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1913090'
00:34:57.940 killing process with pid 1913090
00:34:57.940 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1913090
00:34:57.941 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1913090
00:34:57.941 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:57.941 [2024-10-13 14:30:44.369779] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:34:57.941 [2024-10-13 14:30:44.369860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913090 ]
00:34:57.941 [2024-10-13 14:30:44.504632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:34:57.941 [2024-10-13 14:30:44.554840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:57.941 [2024-10-13 14:30:44.577686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:57.941 Running I/O for 15 seconds...
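The derived fields in the results block are internally consistent: with 4096-byte I/O, MiB/s is iops * 4096 / 2^20 (i.e. iops / 256), and iops * runtime gives the completed I/O count; the 13077 I/Os failed across the three forced failovers are tracked separately in io_failed. A quick arithmetic check:

    awk 'BEGIN { printf "%.5f\n", 12368.39942629918 * 4096 / 1048576 }'   # 48.31406 -> matches "mibps"
    awk 'BEGIN { printf "%.0f\n", 12368.39942629918 * 15.003639 }'        # 185571 completed reads over the run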
00:34:57.941 11696.00 IOPS, 45.69 MiB/s [2024-10-13T12:31:01.648Z]
00:34:57.941 [2024-10-13 14:30:46.996161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:57.941 [2024-10-13 14:30:46.996194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.941 [one READ command print plus one ABORTED - SQ DELETION completion is logged per I/O in flight on the deleted qpair; the pairs for lba 101024 through 101648, in steps of 8, are elided here]
00:34:57.943 [2024-10-13 14:30:46.997575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101656 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:57.943 [2024-10-13 14:30:46.997752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.943 [2024-10-13 14:30:46.997906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 
14:30:46.997924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.997941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.997959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.997976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.997985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.997992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.943 [2024-10-13 14:30:46.998231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.943 [2024-10-13 14:30:46.998240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.944 [2024-10-13 14:30:46.998369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.944 [2024-10-13 14:30:46.998396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.944 [2024-10-13 14:30:46.998402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102032 len:8 PRP1 0x0 PRP2 0x0 00:34:57.944 [2024-10-13 14:30:46.998411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.944 [2024-10-13 14:30:46.998446] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1924bf0 was disconnected and freed. reset controller. 
00:34:57.944 [2024-10-13 14:30:46.998455] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:34:57.944 [2024-10-13 14:30:46.998475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.944 [2024-10-13 14:30:46.998483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.944 [2024-10-13 14:30:46.998492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.944 [2024-10-13 14:30:46.998500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.944 [2024-10-13 14:30:46.998508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.944 [2024-10-13 14:30:46.998515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.944 [2024-10-13 14:30:46.998524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.944 [2024-10-13 14:30:46.998531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.944 [2024-10-13 14:30:46.998539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:57.944 [2024-10-13 14:30:47.002077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:57.944 [2024-10-13 14:30:47.002099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1903cc0 (9): Bad file descriptor
00:34:57.944 [2024-10-13 14:30:47.044250] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
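(Editor's note: the failover NOTICE above comes from SPDK's bdev_nvme multipath logic: when the active TCP path on port 4420 is torn down, queued I/O is completed as ABORTED - SQ DELETION and the controller reconnects on the alternate trid, port 4421. A minimal sketch of how such a two-listener failover pair is typically configured with SPDK's rpc.py follows; the bdev name Nvme0 is hypothetical, the NQN and addresses are taken from the log itself, and the -x failover flag should be verified against rpc.py -h for the SPDK revision under test.)
# Target side: expose the subsystem on both ports (subsystem creation assumed done earlier)
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Initiator side: attach the same controller once per path; the second call registers
# 4421 as the alternate trid that bdev_nvme_failover_trid switches to on path loss
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover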
00:34:57.944 11341.00 IOPS, 44.30 MiB/s [2024-10-13T12:31:01.651Z] 11261.33 IOPS, 43.99 MiB/s [2024-10-13T12:31:01.651Z] 11595.50 IOPS, 45.29 MiB/s [2024-10-13T12:31:01.651Z]
00:34:57.944 [...] (repeated record pairs elided: READ commands lba 60024-60136 and WRITE commands lba 60144-60752 on sqid:1, each printed by nvme_io_qpair_print_command and each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; then queued WRITE i/o for lba 60760-60840 aborted via nvme_qpair_abort_queued_reqs and completed manually with the same status)
00:34:57.947 [2024-10-13 14:30:50.616343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:57.947 [2024-10-13 14:30:50.616348] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60848 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60856 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60864 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60872 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60880 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60888 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:60896 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60904 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60912 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60920 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60928 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60936 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.616569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.616575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.616579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.616583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60944 len:8 PRP1 0x0 PRP2 0x0 
00:34:57.947 [2024-10-13 14:30:50.616588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.628695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.628721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.628731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60952 len:8 PRP1 0x0 PRP2 0x0 00:34:57.947 [2024-10-13 14:30:50.628740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.947 [2024-10-13 14:30:50.628747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.947 [2024-10-13 14:30:50.628753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.947 [2024-10-13 14:30:50.628759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60960 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60968 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60976 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60984 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60992 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61000 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61008 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61016 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61024 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.628977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.628982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.628988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61032 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.628995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.629002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.948 [2024-10-13 14:30:50.629007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.948 [2024-10-13 14:30:50.629013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61040 len:8 PRP1 0x0 PRP2 0x0 00:34:57.948 [2024-10-13 14:30:50.629020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.629058] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1926940 was disconnected and freed. reset controller. 00:34:57.948 [2024-10-13 14:30:50.629074] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:57.948 [2024-10-13 14:30:50.629102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.948 [2024-10-13 14:30:50.629115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.629125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.948 [2024-10-13 14:30:50.629132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.629140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.948 [2024-10-13 14:30:50.629147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.629154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.948 [2024-10-13 14:30:50.629161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.948 [2024-10-13 14:30:50.629168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:57.948 [2024-10-13 14:30:50.629207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1903cc0 (9): Bad file descriptor 00:34:57.948 [2024-10-13 14:30:50.632434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:57.948 [2024-10-13 14:30:50.790492] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
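This burst is the initiator side of a path failover: bdev_nvme tears down the I/O submission queue on the failing path, every in-flight command on qid:1 completes with ABORTED - SQ DELETION (00/08), the still-queued requests are completed manually, and the driver then reconnects on the next trid and resets the controller. A burst like this can be condensed mechanically; the following is a minimal triage sketch (a hypothetical Python helper, not part of the SPDK tree; the regex simply follows the record layout printed above):

    import re
    import sys

    # Matches the 243:nvme_io_qpair_print_command NOTICE records shown in
    # this log; several records can share one physical line, hence finditer.
    CMD = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def tally(text):
        """Count aborted READ/WRITE command prints and find their LBA range."""
        counts = {"READ": 0, "WRITE": 0}
        lbas = []
        for m in CMD.finditer(text):
            counts[m.group(1)] += 1
            lbas.append(int(m.group(5)))
        lba_range = (min(lbas), max(lbas)) if lbas else (None, None)
        return counts, lba_range

    if __name__ == "__main__":
        counts, lba_range = tally(sys.stdin.read())
        print(f"aborted commands: {counts}, lba range: {lba_range}")

Fed the stretch above, it would report WRITE-only aborts spanning lba 60592 through 61040.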
00:34:57.948 11383.20 IOPS, 44.47 MiB/s [2024-10-13T12:31:01.655Z] 11648.33 IOPS, 45.50 MiB/s [2024-10-13T12:31:01.655Z] 11818.43 IOPS, 46.17 MiB/s [2024-10-13T12:31:01.655Z] 11947.25 IOPS, 46.67 MiB/s [2024-10-13T12:31:01.655Z] 12082.33 IOPS, 47.20 MiB/s
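The progress samples are internally consistent with the command sizes printed in the surrounding records: every I/O here is len:8 blocks of 512 bytes (the SGL prints show len:0x1000, i.e. 4096 bytes per command), so IOPS x 4096 B reproduces the logged MiB/s figures. A quick check (plain Python; the numbers are copied from the samples above):

    # Bandwidth = IOPS x bytes-per-command; len:8 x 512 B = 0x1000 = 4096 B.
    SAMPLES = [(11383.20, 44.47), (11648.33, 45.50), (11818.43, 46.17),
               (11947.25, 46.67), (12082.33, 47.20)]
    IO_BYTES = 8 * 512

    for iops, mibs in SAMPLES:
        calc = iops * IO_BYTES / 2**20
        print(f"{iops:>9.2f} IOPS -> {calc:.2f} MiB/s (logged {mibs:.2f})")

Each computed value matches its logged counterpart to two decimal places.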
[... 2024-10-13 14:30:54.995411 - 14:30:54.996706: the same NOTICE pair repeats for every outstanding command on sqid:1: READs (lba 37488 through 37904, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) interleaved with WRITEs (lba 37960 through 38392, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), all completing with ABORTED - SQ DELETION (00/08); from 14:30:54.996725 the still-queued WRITEs (lba 38400 onward) are aborted (579:nvme_qpair_abort_queued_reqs) and completed manually (558:nvme_qpair_manual_complete_request) with PRP1 0x0 PRP2 0x0 ...]
00:34:57.951 [2024-10-13 14:30:54.996862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38456 len:8 PRP1 0x0 PRP2 0x0 00:34:57.951 [2024-10-13 14:30:54.996867] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.951 [2024-10-13 14:30:54.996874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.951 [2024-10-13 14:30:54.996877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.951 [2024-10-13 14:30:54.996882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38464 len:8 PRP1 0x0 PRP2 0x0 00:34:57.951 [2024-10-13 14:30:54.996886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.951 [2024-10-13 14:30:54.996891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.951 [2024-10-13 14:30:54.996895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.951 [2024-10-13 14:30:54.996899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38472 len:8 PRP1 0x0 PRP2 0x0 00:34:57.951 [2024-10-13 14:30:54.996904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.951 [2024-10-13 14:30:54.996910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.951 [2024-10-13 14:30:54.996913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.951 [2024-10-13 14:30:54.996918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38480 len:8 PRP1 0x0 PRP2 0x0 00:34:57.951 [2024-10-13 14:30:54.996924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.951 [2024-10-13 14:30:54.996929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.951 [2024-10-13 14:30:54.996933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.951 [2024-10-13 14:30:54.996937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38488 len:8 PRP1 0x0 PRP2 0x0 00:34:57.951 [2024-10-13 14:30:54.996942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.951 [2024-10-13 14:30:54.996947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.951 [2024-10-13 14:30:54.996951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:54.996955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38496 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:54.996961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:54.996966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:54.996970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:54.996974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38504 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:54.996979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:54.996985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:54.996988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:54.996993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37912 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:54.996998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:54.997003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:54.997007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:54.997011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37920 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:54.997021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:55.009552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:55.009575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:55.009583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37928 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:55.009590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:55.009596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:55.009600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:55.009605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37936 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:55.009610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:55.009616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:55.009620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:55.009624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37944 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:55.009629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.952 [2024-10-13 14:30:55.009635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:57.952 [2024-10-13 14:30:55.009639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:57.952 [2024-10-13 14:30:55.009643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37952 len:8 PRP1 0x0 PRP2 0x0 00:34:57.952 [2024-10-13 14:30:55.009648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:57.952 [2024-10-13 14:30:55.009681] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x192f910 was disconnected and freed. reset controller.
00:34:57.952 [2024-10-13 14:30:55.009688] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:34:57.952 [2024-10-13 14:30:55.009710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.952 [2024-10-13 14:30:55.009716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.952 [2024-10-13 14:30:55.009723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.952 [2024-10-13 14:30:55.009729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.952 [2024-10-13 14:30:55.009735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.952 [2024-10-13 14:30:55.009740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.952 [2024-10-13 14:30:55.009746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:57.952 [2024-10-13 14:30:55.009751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:57.952 [2024-10-13 14:30:55.009757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:57.952 [2024-10-13 14:30:55.009787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1903cc0 (9): Bad file descriptor
00:34:57.952 [2024-10-13 14:30:55.012245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:57.952 [2024-10-13 14:30:55.082278] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
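Each successful failover ends with a "Resetting controller successful" notice like the one above, and host/failover.sh counts these next to decide pass/fail. A minimal sketch of that check, assuming (as in this run) the bdevperf output was captured to test/nvmf/host/try.txt; the grep pattern, the expected count of 3, and the file path are all taken from this trace:

  # Hedged sketch of the pass/fail assertion visible in the trace below.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Three path switches occurred in this run, so exactly three resets are expected.
  count=$(grep -c 'Resetting controller successful' "$SPDK/test/nvmf/host/try.txt")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi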
00:34:57.952 12075.40 IOPS, 47.17 MiB/s
[2024-10-13T12:31:01.659Z] 12150.36 IOPS, 47.46 MiB/s
[2024-10-13T12:31:01.659Z] 12218.50 IOPS, 47.73 MiB/s
[2024-10-13T12:31:01.659Z] 12271.85 IOPS, 47.94 MiB/s
[2024-10-13T12:31:01.659Z] 12320.36 IOPS, 48.13 MiB/s
[2024-10-13T12:31:01.659Z] 12369.53 IOPS, 48.32 MiB/s
00:34:57.952 Latency(us)
00:34:57.952 [2024-10-13T12:31:01.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:57.952 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:57.952 Verification LBA range: start 0x0 length 0x4000
00:34:57.952 NVMe0n1 : 15.00 12368.40 48.31 871.59 0.00 9645.77 543.99 21458.50
00:34:57.952 [2024-10-13T12:31:01.659Z] ===================================================================================================================
00:34:57.952 [2024-10-13T12:31:01.659Z] Total : 12368.40 48.31 871.59 0.00 9645.77 543.99 21458.50
00:34:57.952 Received shutdown signal, test time was about 15.000000 seconds
00:34:57.952
00:34:57.952 Latency(us)
00:34:57.952 [2024-10-13T12:31:01.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:57.952 [2024-10-13T12:31:01.659Z] ===================================================================================================================
00:34:57.952 [2024-10-13T12:31:01.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1916132
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1916132 /var/tmp/bdevperf.sock
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1916132 ']'
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
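The RPC sequence traced next builds the multipath topology that this second bdevperf instance exercises. A condensed sketch of that sequence (every command, port, and argument below appears verbatim in the trace that follows; only the loop is an editorial shorthand):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  # Expose the subsystem on two extra ports alongside the existing 4420 listener.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Attach all three paths to one bdev controller in failover mode (-x failover).
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # Detaching the active 4420 path forces the bdev layer to fail over to 4421,
  # which is exactly the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
  # notice seen later in try.txt.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1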
00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:57.952 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:58.523 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:58.523 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:34:58.523 14:31:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:58.523 [2024-10-13 14:31:02.124104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:58.523 14:31:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:58.783 [2024-10-13 14:31:02.304124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:58.783 14:31:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:59.043 NVMe0n1 00:34:59.043 14:31:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:59.613 00:34:59.613 14:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:59.873 00:34:59.873 14:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:59.873 14:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:59.873 14:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:00.134 14:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:35:03.431 14:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:03.431 14:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:35:03.431 14:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1917189 00:35:03.431 14:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:03.431 14:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1917189 00:35:04.372 { 00:35:04.372 "results": [ 00:35:04.372 { 00:35:04.372 "job": "NVMe0n1", 00:35:04.372 "core_mask": "0x1", 
00:35:04.372 "workload": "verify", 00:35:04.372 "status": "finished", 00:35:04.372 "verify_range": { 00:35:04.372 "start": 0, 00:35:04.372 "length": 16384 00:35:04.372 }, 00:35:04.372 "queue_depth": 128, 00:35:04.372 "io_size": 4096, 00:35:04.372 "runtime": 1.013051, 00:35:04.372 "iops": 12818.703105766639, 00:35:04.372 "mibps": 50.073059006900934, 00:35:04.372 "io_failed": 0, 00:35:04.372 "io_timeout": 0, 00:35:04.372 "avg_latency_us": 9946.867533048133, 00:35:04.372 "min_latency_us": 1779.084530571333, 00:35:04.372 "max_latency_us": 11167.17674574006 00:35:04.372 } 00:35:04.372 ], 00:35:04.372 "core_count": 1 00:35:04.372 } 00:35:04.372 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:04.372 [2024-10-13 14:31:01.166960] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:35:04.372 [2024-10-13 14:31:01.167020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916132 ] 00:35:04.372 [2024-10-13 14:31:01.297757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:04.372 [2024-10-13 14:31:01.344700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.372 [2024-10-13 14:31:01.359556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.372 [2024-10-13 14:31:03.702819] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:04.372 [2024-10-13 14:31:03.702855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.372 [2024-10-13 14:31:03.702863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.372 [2024-10-13 14:31:03.702870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.372 [2024-10-13 14:31:03.702875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.372 [2024-10-13 14:31:03.702881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.372 [2024-10-13 14:31:03.702886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.372 [2024-10-13 14:31:03.702892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:04.372 [2024-10-13 14:31:03.702897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.372 [2024-10-13 14:31:03.702902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:04.372 [2024-10-13 14:31:03.702924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:04.372 [2024-10-13 14:31:03.702935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c6cc0 (9): Bad file descriptor
00:35:04.372 [2024-10-13 14:31:03.755257] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:04.372 Running I/O for 1 seconds...
00:35:04.372 12731.00 IOPS, 49.73 MiB/s
00:35:04.372
00:35:04.372 Latency(us)
00:35:04.372 [2024-10-13T12:31:08.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:04.372 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:04.372 Verification LBA range: start 0x0 length 0x4000
00:35:04.372 NVMe0n1 : 1.01 12818.70 50.07 0.00 0.00 9946.87 1779.08 11167.18
00:35:04.372 [2024-10-13T12:31:08.079Z] ===================================================================================================================
00:35:04.372 [2024-10-13T12:31:08.079Z] Total : 12818.70 50.07 0.00 0.00 9946.87 1779.08 11167.18
00:35:04.372 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:04.372 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:35:04.633 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:04.893 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:04.893 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:35:04.893 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:05.153 14:31:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:35:08.448 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:35:08.448 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:35:08.448 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1916132
00:35:08.448 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1916132 ']'
00:35:08.449 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1916132
00:35:08.449 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:35:08.449 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:08.449 14:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1916132
00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1916132' 00:35:08.449 killing process with pid 1916132 00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1916132 00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1916132 00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:35:08.449 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.709 rmmod nvme_tcp 00:35:08.709 rmmod nvme_fabrics 00:35:08.709 rmmod nvme_keyring 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1912506 ']' 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1912506 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1912506 ']' 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1912506 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:08.709 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1912506 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1912506' 00:35:08.970 killing process with pid 1912506 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1912506 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1912506 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:08.970 14:31:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.970 14:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:11.529 00:35:11.529 real 0m40.572s 00:35:11.529 user 2m3.903s 00:35:11.529 sys 0m8.852s 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:11.529 ************************************ 00:35:11.529 END TEST nvmf_failover 00:35:11.529 ************************************ 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.529 ************************************ 00:35:11.529 START TEST nvmf_host_discovery 00:35:11.529 ************************************ 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:11.529 * Looking for test storage... 
00:35:11.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.529 --rc genhtml_branch_coverage=1 00:35:11.529 --rc genhtml_function_coverage=1 00:35:11.529 --rc genhtml_legend=1 00:35:11.529 --rc geninfo_all_blocks=1 00:35:11.529 --rc geninfo_unexecuted_blocks=1 00:35:11.529 00:35:11.529 ' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.529 --rc genhtml_branch_coverage=1 00:35:11.529 --rc genhtml_function_coverage=1 00:35:11.529 --rc genhtml_legend=1 00:35:11.529 --rc geninfo_all_blocks=1 00:35:11.529 --rc geninfo_unexecuted_blocks=1 00:35:11.529 00:35:11.529 ' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.529 --rc genhtml_branch_coverage=1 00:35:11.529 --rc genhtml_function_coverage=1 00:35:11.529 --rc genhtml_legend=1 00:35:11.529 --rc geninfo_all_blocks=1 00:35:11.529 --rc geninfo_unexecuted_blocks=1 00:35:11.529 00:35:11.529 ' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:11.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.529 --rc genhtml_branch_coverage=1 00:35:11.529 --rc genhtml_function_coverage=1 00:35:11.529 --rc genhtml_legend=1 00:35:11.529 --rc geninfo_all_blocks=1 00:35:11.529 --rc geninfo_unexecuted_blocks=1 00:35:11.529 00:35:11.529 ' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:11.529 14:31:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.529 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:11.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:35:11.530 14:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.674 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:19.674 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:19.675 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.675 14:31:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:19.675 Found net devices under 0000:31:00.0: cvl_0_0 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:19.675 Found net devices under 0000:31:00.1: cvl_0_1 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.675 
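The nvmf_tcp_init steps traced next split the two cvl ports between the root namespace (initiator, 10.0.0.1 on cvl_0_1) and a new cvl_0_0_ns_spdk namespace (target, 10.0.0.2 on cvl_0_0), so both ends of the TCP transport run over real NICs on one host. A condensed sketch of that sequence, with interface names and addresses exactly as they appear on this rig in the trace below:

  # Condensed from the nvmf_tcp_init trace that follows.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic, then sanity-check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1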
14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:19.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:19.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms
00:35:19.675
00:35:19.675 --- 10.0.0.2 ping statistics ---
00:35:19.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:19.675 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:19.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:19.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms
00:35:19.675
00:35:19.675 --- 10.0.0.1 ping statistics ---
00:35:19.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:19.675 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1922551
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1922551
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1922551 ']'
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:19.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:19.675 14:31:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.675 [2024-10-13 14:31:22.554687] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:35:19.675 [2024-10-13 14:31:22.554753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:19.675 [2024-10-13 14:31:22.696050] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:35:19.675 [2024-10-13 14:31:22.744173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:19.675 [2024-10-13 14:31:22.770097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:19.675 [2024-10-13 14:31:22.770142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:19.675 [2024-10-13 14:31:22.770151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:19.675 [2024-10-13 14:31:22.770158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:19.675 [2024-10-13 14:31:22.770164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:19.675 [2024-10-13 14:31:22.770864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:19.675 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:35:19.675 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
00:35:19.675 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:35:19.675 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:19.675 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 [2024-10-13 14:31:23.409376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 [2024-10-13 14:31:23.421515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 null0
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 null1
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1922753
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1922753 /tmp/host.sock
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1922753 ']'
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:35:19.938 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:35:19.938 14:31:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:35:19.938 [2024-10-13 14:31:23.525082] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:35:19.938 [2024-10-13 14:31:23.525142] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922753 ]
00:35:20.200 [2024-10-13 14:31:23.656390] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
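For readers skimming the trace: everything from nvmf/common.sh@251 through host/discovery.sh@46 above is the standard bring-up for this test. One port of the NIC pair is moved into a private network namespace to act as the NVMe-oF target while the other stays in the default namespace as the initiator, and two nvmf_tgt instances are started. Condensed into plain commands it amounts to the sketch below; the interface names (cvl_0_0/cvl_0_1), addresses, and binary invocations are copied from the log, while the trailing "&" stands in for the harness's nvmfappstart/waitforlisten wrappers, so treat this as an illustration rather than the actual script.

NS=cvl_0_0_ns_spdk

# Isolate the target-side port in its own network namespace.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator keeps 10.0.0.1 in the default namespace; target gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Target application inside the namespace; the "host" application outside,
# taking RPCs on /tmp/host.sock, as in the trace above.
ip netns exec "$NS" spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &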
00:35:20.200 [2024-10-13 14:31:23.705724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.200 [2024-10-13 14:31:23.724481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:20.772 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:21.034 14:31:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 [2024-10-13 14:31:24.653920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:21.034 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:35:21.296 14:31:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:35:21.869 [2024-10-13 14:31:25.324086] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:21.869 [2024-10-13 14:31:25.324108] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:21.869 [2024-10-13 14:31:25.324121] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:21.869 [2024-10-13 14:31:25.413181] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:22.130 [2024-10-13 14:31:25.597579] 
bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:22.130 [2024-10-13 14:31:25.597603] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.391 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:22.392 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.392 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:22.392 14:31:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:22.392 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:22.653 14:31:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.653 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.653 [2024-10-13 14:31:26.194234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:22.653 [2024-10-13 14:31:26.194442] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:22.653 [2024-10-13 14:31:26.194469] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:22.654 [2024-10-13 14:31:26.282513] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:22.654 14:31:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:22.654 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.654 [2024-10-13 14:31:26.345128] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:22.654 [2024-10-13 14:31:26.345153] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:22.654 [2024-10-13 14:31:26.345158] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:22.915 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:35:22.915 14:31:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:23.857 14:31:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.857 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.857 [2024-10-13 14:31:27.470721] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:35:23.858 [2024-10-13 14:31:27.470745] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:23.858 [2024-10-13 14:31:27.475305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.858 [2024-10-13 14:31:27.475323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.858 [2024-10-13 14:31:27.475332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.858 [2024-10-13 14:31:27.475340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.858 [2024-10-13 14:31:27.475348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.858 [2024-10-13 14:31:27.475356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.858 [2024-10-13 14:31:27.475363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:23.858 [2024-10-13 14:31:27.475371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:23.858 [2024-10-13 14:31:27.475378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:23.858 [2024-10-13 14:31:27.485295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.495311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 [2024-10-13 14:31:27.495648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.495663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.495671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.495684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.495695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 14:31:27.495702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.858 [2024-10-13 14:31:27.495710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
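The xtrace lines tagged common/autotest_common.sh@914-@920 and host/discovery.sh@55/@59 that repeat throughout this test come from one small polling helper plus two RPC one-liners. Reassembled from the trace alone they are roughly the following; this is a reconstruction (rpc.py substituted for the harness's rpc_cmd wrapper), so the real scripts may differ in detail.

# common/autotest_common.sh@914-@920: poll a shell condition up to 10 times.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # cond is a shell expression, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# host/discovery.sh@59: names of controllers attached via discovery, e.g. "nvme0".
get_subsystem_names() {
    spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# host/discovery.sh@55: bdevs created from the discovered namespaces, e.g. "nvme0n1 nvme0n2".
get_bdev_list() {
    spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}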
00:35:23.858 [2024-10-13 14:31:27.495722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.858 [2024-10-13 14:31:27.505344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 [2024-10-13 14:31:27.505644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.505654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.505659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.505667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.505675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 14:31:27.505679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.858 [2024-10-13 14:31:27.505684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.858 [2024-10-13 14:31:27.505692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.858 [2024-10-13 14:31:27.515364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 [2024-10-13 14:31:27.515558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.515567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.515572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.515580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.515587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 14:31:27.515591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.858 [2024-10-13 14:31:27.515596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.858 [2024-10-13 14:31:27.515603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
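The repeated reset/reconnect records above are paced by the waitforcondition helper whose xtrace markers (common/autotest_common.sh@914-920) recur throughout this run. A minimal sketch of the polling loop those markers imply; the final return 1 is an assumption, the rest follows the traced line numbers:

waitforcondition() {
    local cond=$1          # @914: condition string, re-evaluated each pass
    local max=10           # @915: retry budget
    while (( max-- )); do  # @916
        if eval "$cond"; then  # @917, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            return 0           # @918: condition met
        fi
        sleep 1                # @920: back off one second before the next poll
    done
    return 1  # assumed: report failure once the retry budget runs out
}

Every waitforcondition block in this log is an instance of that loop; the eval lines at @917 show the condition string being expanded on each pass.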
00:35:23.858 [2024-10-13 14:31:27.525386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 [2024-10-13 14:31:27.525690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.525698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.525703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.525714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.525721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 14:31:27.525725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.858 [2024-10-13 14:31:27.525730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.858 [2024-10-13 14:31:27.525738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.858 [2024-10-13 14:31:27.535407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:23.858 [2024-10-13 14:31:27.535731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.535743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.535748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.535756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.535771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 
14:31:27.535778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.858 [2024-10-13 14:31:27.535784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.858 [2024-10-13 14:31:27.535791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:23.858 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:23.858 [2024-10-13 14:31:27.545429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 [2024-10-13 14:31:27.545723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.545732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.545737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.545745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.545755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 14:31:27.545760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.858 [2024-10-13 14:31:27.545765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.858 [2024-10-13 14:31:27.545772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.858 [2024-10-13 14:31:27.555451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:23.858 [2024-10-13 14:31:27.555746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.858 [2024-10-13 14:31:27.555754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:23.858 [2024-10-13 14:31:27.555759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:23.858 [2024-10-13 14:31:27.555766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:23.858 [2024-10-13 14:31:27.555774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:23.858 [2024-10-13 14:31:27.555778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:23.859 [2024-10-13 14:31:27.555783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:23.859 [2024-10-13 14:31:27.555790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
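The list helpers expanded at host/discovery.sh@55 and @59 both follow the same rpc_cmd-plus-jq pipeline against the host app's private RPC socket (/tmp/host.sock). Sketches reconstructed from the traced pipelines; the wrapping is assumed, but the commands are verbatim from this log:

get_subsystem_names() {
    # @59: names of the controllers attached on the host app, sorted and
    # flattened to one line ("nvme0" here, "" once discovery is stopped)
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # @55: every bdev known to the host app ("nvme0n1 nvme0n2" in this test)
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}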
00:35:24.135 [2024-10-13 14:31:27.565472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:24.135 [2024-10-13 14:31:27.565767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.135 [2024-10-13 14:31:27.565775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:24.135 [2024-10-13 14:31:27.565780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:24.135 [2024-10-13 14:31:27.565788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:24.135 [2024-10-13 14:31:27.565795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:24.135 [2024-10-13 14:31:27.565799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:24.135 [2024-10-13 14:31:27.565804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:24.135 [2024-10-13 14:31:27.565812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.135 [2024-10-13 14:31:27.575495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:24.135 [2024-10-13 14:31:27.575791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.135 [2024-10-13 14:31:27.575799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:24.135 [2024-10-13 14:31:27.575804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:24.135 [2024-10-13 14:31:27.575812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:24.135 [2024-10-13 14:31:27.575820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:24.135 [2024-10-13 14:31:27.575824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:24.135 [2024-10-13 14:31:27.575832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:24.135 [2024-10-13 14:31:27.575844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:24.135 [2024-10-13 14:31:27.585514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:24.135 [2024-10-13 14:31:27.585811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.135 [2024-10-13 14:31:27.585820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:24.135 [2024-10-13 14:31:27.585825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:24.135 [2024-10-13 14:31:27.585832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:24.135 [2024-10-13 14:31:27.585844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:24.135 [2024-10-13 14:31:27.585848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:24.135 [2024-10-13 14:31:27.585853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:24.135 [2024-10-13 14:31:27.585860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
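The @131 wait above compares get_subsystem_paths nvme0 against $NVMF_SECOND_PORT (4421); the helper's pipeline is traced at host/discovery.sh@63 just below. A sketch reconstructed from that pipeline:

get_subsystem_paths() {
    # @63: the trsvcid (TCP port) of every active path to controller $1,
    # numerically sorted onto one line: "4420 4421" while both listeners
    # are up, "4421" once the 4420 listener has been removed
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}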
00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:24.135 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:24.135 [2024-10-13 14:31:27.595536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:24.135 [2024-10-13 14:31:27.595837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.135 [2024-10-13 14:31:27.595846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fb880 with addr=10.0.0.2, port=4420 00:35:24.135 [2024-10-13 14:31:27.595852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fb880 is same with the state(6) to be set 00:35:24.135 [2024-10-13 14:31:27.595859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fb880 (9): Bad file descriptor 00:35:24.135 [2024-10-13 14:31:27.595867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:24.135 [2024-10-13 14:31:27.595871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:24.135 [2024-10-13 14:31:27.595879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:24.136 [2024-10-13 14:31:27.595886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
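is_notification_count_eq (host/discovery.sh@79-80) wraps waitforcondition around get_notification_count (@74-75), which counts notify events newer than the running notify_id cursor. A sketch consistent with the values traced above and below (a count of 0 leaves notify_id at 2; the later count of 2 moves it to 4); the cursor arithmetic is inferred, not shown verbatim in the trace:

get_notification_count() {
    # @74: count the events with id greater than the current cursor
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
        -i "$notify_id" | jq '. | length')
    # @75: advance the cursor past the events just counted (inferred)
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1  # @79
    # @80: poll until the observed count settles at the expected value
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}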
00:35:24.136 [2024-10-13 14:31:27.599903] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:35:24.136 [2024-10-13 14:31:27.599916] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:24.136 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.136 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:35:24.136 14:31:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:35:25.082 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:35:25.343 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.344 14:31:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.285 [2024-10-13 14:31:29.934456] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:26.286 [2024-10-13 14:31:29.934471] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:26.286 [2024-10-13 14:31:29.934480] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:26.546 [2024-10-13 14:31:30.022535] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:35:26.808 [2024-10-13 14:31:30.297117] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:26.808 [2024-10-13 14:31:30.297141] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.808 request: 00:35:26.808 { 00:35:26.808 "name": "nvme", 00:35:26.808 "trtype": "tcp", 00:35:26.808 "traddr": "10.0.0.2", 00:35:26.808 "adrfam": "ipv4", 00:35:26.808 "trsvcid": "8009", 00:35:26.808 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:26.808 "wait_for_attach": true, 00:35:26.808 "method": "bdev_nvme_start_discovery", 00:35:26.808 "req_id": 1 00:35:26.808 } 00:35:26.808 Got JSON-RPC error response 00:35:26.808 response: 00:35:26.808 { 00:35:26.808 "code": -17, 00:35:26.808 "message": "File exists" 00:35:26.808 } 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:26.808 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.809 request: 00:35:26.809 { 00:35:26.809 "name": "nvme_second", 00:35:26.809 "trtype": "tcp", 00:35:26.809 "traddr": "10.0.0.2", 00:35:26.809 "adrfam": "ipv4", 00:35:26.809 "trsvcid": "8009", 00:35:26.809 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:26.809 "wait_for_attach": true, 00:35:26.809 "method": "bdev_nvme_start_discovery", 00:35:26.809 "req_id": 1 00:35:26.809 } 00:35:26.809 Got JSON-RPC error response 00:35:26.809 response: 00:35:26.809 { 00:35:26.809 "code": -17, 00:35:26.809 "message": "File exists" 00:35:26.809 } 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:35:26.809 14:31:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:26.809 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.070 14:31:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:28.011 [2024-10-13 14:31:31.557978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.011 [2024-10-13 14:31:31.558003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1913a60 with addr=10.0.0.2, port=8010 00:35:28.011 [2024-10-13 14:31:31.558012] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:28.011 [2024-10-13 14:31:31.558018] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:28.011 [2024-10-13 14:31:31.558023] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:28.952 [2024-10-13 14:31:32.557836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:28.952 [2024-10-13 14:31:32.557854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1913a60 with addr=10.0.0.2, port=8010 00:35:28.952 [2024-10-13 14:31:32.557862] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:28.952 [2024-10-13 14:31:32.557867] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:35:28.952 [2024-10-13 14:31:32.557871] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:35:29.890 [2024-10-13 14:31:33.557638] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:35:29.890 request: 00:35:29.890 { 00:35:29.890 "name": "nvme_second", 00:35:29.890 "trtype": "tcp", 00:35:29.890 "traddr": "10.0.0.2", 00:35:29.890 "adrfam": "ipv4", 00:35:29.890 "trsvcid": "8010", 00:35:29.890 "hostnqn": "nqn.2021-12.io.spdk:test", 00:35:29.890 "wait_for_attach": false, 00:35:29.890 "attach_timeout_ms": 3000, 00:35:29.890 "method": "bdev_nvme_start_discovery", 00:35:29.890 "req_id": 1 00:35:29.890 } 00:35:29.890 Got JSON-RPC error response 00:35:29.890 response: 00:35:29.890 { 00:35:29.890 "code": -110, 00:35:29.890 "message": "Connection timed out" 00:35:29.890 } 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:35:29.890 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1922753 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.150 rmmod nvme_tcp 00:35:30.150 rmmod nvme_fabrics 00:35:30.150 rmmod nvme_keyring 00:35:30.150 14:31:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1922551 ']' 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1922551 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1922551 ']' 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1922551 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1922551 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:30.150 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1922551' 00:35:30.151 killing process with pid 1922551 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1922551 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1922551 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:30.151 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:35:30.411 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.411 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.411 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.411 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.411 14:31:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.326 00:35:32.326 real 0m21.211s 00:35:32.326 user 0m25.226s 00:35:32.326 sys 0m7.157s 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:35:32.326 
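The teardown above stops the target app through killprocess (common/autotest_common.sh@950-974). A sketch assembled from the traced statements; the early-return codes are assumptions, and the sudo branch is not exercised in this run, so its body is left empty:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1   # @950: a pid argument is required (code assumed)
    kill -0 "$pid" || return 0  # @954: signal 0 probes that the pid is alive (assumed)
    if [ "$(uname)" = Linux ]; then                      # @955
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_1 here
        if [ "$process_name" = sudo ]; then              # @960
            :  # branch not taken in this trace; its body is not visible here
        fi
    fi
    echo "killing process with pid $pid"  # @968
    kill "$pid"                           # @969
    wait "$pid"                           # @974: reap the pid and propagate its status
}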
************************************ 00:35:32.326 END TEST nvmf_host_discovery 00:35:32.326 ************************************ 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.326 14:31:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.326 ************************************ 00:35:32.326 START TEST nvmf_host_multipath_status 00:35:32.326 ************************************ 00:35:32.326 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:35:32.588 * Looking for test storage... 00:35:32.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:32.588 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:32.588 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:35:32.588 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:32.588 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:32.588 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.588 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:32.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.589 --rc genhtml_branch_coverage=1 00:35:32.589 --rc genhtml_function_coverage=1 00:35:32.589 --rc genhtml_legend=1 00:35:32.589 --rc geninfo_all_blocks=1 00:35:32.589 --rc geninfo_unexecuted_blocks=1 00:35:32.589 00:35:32.589 ' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:32.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.589 --rc genhtml_branch_coverage=1 00:35:32.589 --rc genhtml_function_coverage=1 00:35:32.589 --rc genhtml_legend=1 00:35:32.589 --rc geninfo_all_blocks=1 00:35:32.589 --rc geninfo_unexecuted_blocks=1 00:35:32.589 00:35:32.589 ' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:32.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.589 --rc genhtml_branch_coverage=1 00:35:32.589 --rc genhtml_function_coverage=1 00:35:32.589 --rc genhtml_legend=1 00:35:32.589 --rc geninfo_all_blocks=1 00:35:32.589 --rc geninfo_unexecuted_blocks=1 00:35:32.589 00:35:32.589 ' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:32.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.589 --rc genhtml_branch_coverage=1 00:35:32.589 --rc genhtml_function_coverage=1 00:35:32.589 --rc genhtml_legend=1 00:35:32.589 --rc geninfo_all_blocks=1 00:35:32.589 --rc geninfo_unexecuted_blocks=1 00:35:32.589 00:35:32.589 ' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
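The lcov probe above runs lt 1.15 2, which the trace expands into cmp_versions (scripts/common.sh@333-373): split both versions on ., - and :, then compare component by component as decimals. A condensed sketch of that comparison; the real helper tracks lt/gt/eq flags (@343-345) and more operators, and the zero fallback for short versions plus the final equality case are assumptions (this run returns at @368 as soon as 1 < 2):

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"  # @336: "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"  # @337: "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}  # @340-341: lengths 2 and 1 here
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do  # @364
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}  # @365-366 via decimal(); fallback assumed
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi  # @367
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi  # @368: 1 < 2, so '<' holds
    done
    [[ $op == '<=' || $op == '>=' || $op == '=' ]]  # all components equal (assumed)
}

lt() { cmp_versions "$1" '<' "$2"; }  # @373, as invoked for "lt 1.15 2" above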
00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:32.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:32.589 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.590 14:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:35:40.823 14:31:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:40.823 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
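The block above is nvmf/common.sh's gather_supported_nvmf_pci_devs: it builds allowlists of supported NIC PCI IDs (two Intel E810 parts, one X722 part, and a range of Mellanox ConnectX devices) and then walks every PCI network function, keeping only those whose vendor:device pair is on a list. A minimal standalone sketch of that filter, using the exact IDs visible in the trace; the lspci-based enumeration below is illustrative only (the script itself reads a prebuilt pci_bus_cache from sysfs):

#!/usr/bin/env bash
# Allowlists copied from the nvmf/common.sh trace above
intel=0x8086 mellanox=0x15b3
e810=("$intel:0x1592" "$intel:0x159b")
x722=("$intel:0x37d2")
mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6" "$mellanox:0x101d"
     "$mellanox:0x101b" "$mellanox:0x1017" "$mellanox:0x1019" "$mellanox:0x1015"
     "$mellanox:0x1013")
# Illustrative enumeration: slot, vendor and device ID from lspci numeric output
while read -r slot vendor device; do
  for id in "${e810[@]}" "${x722[@]}" "${mlx[@]}"; do
    if [[ "0x$vendor:0x$device" == "$id" ]]; then
      echo "Found $slot (0x$vendor - 0x$device)"   # same shape as the 'Found 0000:31:00.0 ...' lines
    fi
  done
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3, $4}')

On this machine both matches are 0x8086:0x159b (E810, ice driver), so the e810 branch is taken twice and the two port functions 0000:31:00.0 and 0000:31:00.1 become the test's net_devs.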
00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:40.823 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:40.823 Found net devices under 0000:31:00.0: cvl_0_0 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:40.823 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:35:40.824 Found net devices under 0000:31:00.1: cvl_0_1 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:40.824 14:31:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:40.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:40.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:35:40.824 00:35:40.824 --- 10.0.0.2 ping statistics --- 00:35:40.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.824 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:40.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:40.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:35:40.824 00:35:40.824 --- 10.0.0.1 ping statistics --- 00:35:40.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.824 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1929152 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1929152 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1929152 ']' 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:40.824 14:31:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:40.824 14:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:40.824 [2024-10-13 14:31:44.013019] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:35:40.824 [2024-10-13 14:31:44.013105] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.824 [2024-10-13 14:31:44.155213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:40.824 [2024-10-13 14:31:44.203604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:40.824 [2024-10-13 14:31:44.230639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.824 [2024-10-13 14:31:44.230690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.824 [2024-10-13 14:31:44.230698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.824 [2024-10-13 14:31:44.230705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.824 [2024-10-13 14:31:44.230711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
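With two matching E810 ports found (cvl_0_0 and cvl_0_1), nvmf_tcp_init split them between target and initiator: the target NIC was moved into its own network namespace so the NVMe/TCP traffic really crosses the wire instead of short-circuiting through the host stack, an iptables rule admits port 4420, and both directions were ping-verified before nvmf_tgt was launched inside the namespace. A condensed replay of the commands from the trace (absolute Jenkins paths shortened to repo-relative ones):

# Target interface into its own namespace; the initiator side stays on the host
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic (the test also tags the rule with an SPDK_NVMF comment)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Reachability both ways, as in the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target then runs inside the namespace on cores 0-1 (-m 0x3)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3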
00:35:40.824 [2024-10-13 14:31:44.232444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.824 [2024-10-13 14:31:44.232447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1929152 00:35:41.444 14:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:41.444 [2024-10-13 14:31:45.053167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.444 14:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:41.705 Malloc0 00:35:41.705 14:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:35:41.966 14:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:42.227 14:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.227 [2024-10-13 14:31:45.881058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.227 14:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:42.488 [2024-10-13 14:31:46.077012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1929527 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1929527 /var/tmp/bdevperf.sock 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1929527 ']' 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:42.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:42.488 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:43.432 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:43.432 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:35:43.433 14:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:43.693 14:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:43.954 Nvme0n1 00:35:44.216 14:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:44.476 Nvme0n1 00:35:44.476 14:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:44.476 14:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:47.020 14:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:47.020 14:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:47.020 14:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:47.020 14:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:47.962 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:47.962 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:47.962 14:31:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:47.962 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.223 14:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:48.484 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.484 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:48.484 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.484 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:48.745 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:48.745 14:31:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:49.006 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:49.006 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:49.006 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:49.267 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:49.267 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:50.650 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:50.650 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:50.650 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.650 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.650 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:50.911 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:50.911 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:50.911 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:50.911 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:51.171 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:51.171 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:51.171 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:51.171 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:51.432 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:51.432 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:51.432 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:51.432 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:51.432 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:51.432 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:51.432 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:51.692 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:51.952 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:52.895 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:52.895 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:52.895 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:52.895 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.156 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:53.416 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.416 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:53.416 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.416 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:53.677 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:53.936 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:53.936 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:53.936 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:35:54.197 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:54.197 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:55.581 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:55.581 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:55.581 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.581 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.581 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:55.842 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:55.842 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:55.842 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:55.842 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:56.102 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:56.362 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:56.362 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:56.362 14:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:56.622 14:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:56.622 14:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.016 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:58.277 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.277 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:58.277 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.277 14:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:58.538 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:58.797 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:58.797 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:58.797 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:59.057 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:59.318 14:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:36:00.259 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:36:00.259 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:00.259 14:32:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.259 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:00.520 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:00.520 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:00.520 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.520 14:32:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:00.520 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.520 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:00.520 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.520 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:00.780 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:00.780 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:00.780 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:00.780 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:01.041 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:01.041 
14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:01.301 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:01.302 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:36:01.562 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:36:01.562 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:01.562 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:01.822 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:36:02.763 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:36:02.763 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:02.763 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:02.763 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:03.023 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.024 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:03.024 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.024 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.284 14:32:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.284 14:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:03.546 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.546 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:03.546 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:03.546 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:03.807 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:03.807 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:03.807 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:03.807 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:04.066 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:04.066 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:36:04.066 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:04.066 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:04.326 14:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:36:05.265 14:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:36:05.265 14:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:05.265 14:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.265 14:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:05.525 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:05.525 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:05.525 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.525 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:05.786 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:05.786 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:05.786 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:05.786 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.046 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:06.307 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:06.307 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:06.307 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:06.307 14:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:06.569 14:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:06.569 14:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:36:06.569 
14:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:06.569 14:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:06.829 14:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:36:07.768 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:36:07.768 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:07.768 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:07.768 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:08.027 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:08.027 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:08.027 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:08.027 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:08.286 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:08.286 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:08.286 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:08.286 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:08.547 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:08.808 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:08.808 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:08.808 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:08.808 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:09.069 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:09.069 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:36:09.069 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:09.069 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:09.330 14:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:36:10.272 14:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:36:10.272 14:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:10.272 14:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.272 14:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:10.533 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.533 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:10.533 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.533 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:10.794 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:11.054 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:11.054 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:11.054 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:11.054 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:11.315 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:11.315 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:11.315 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:11.315 14:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1929527 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1929527 ']' 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1929527 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1929527 00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # 
process_name=reactor_2
00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1929527'
killing process with pid 1929527
14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1929527
00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1929527
00:36:11.579 {
00:36:11.579   "results": [
00:36:11.579     {
00:36:11.579       "job": "Nvme0n1",
00:36:11.579       "core_mask": "0x4",
00:36:11.579       "workload": "verify",
00:36:11.579       "status": "terminated",
00:36:11.579       "verify_range": {
00:36:11.579         "start": 0,
00:36:11.579         "length": 16384
00:36:11.579       },
00:36:11.579       "queue_depth": 128,
00:36:11.579       "io_size": 4096,
00:36:11.579       "runtime": 26.889565,
00:36:11.579       "iops": 11973.864210893706,
00:36:11.579       "mibps": 46.77290707380354,
00:36:11.579       "io_failed": 0,
00:36:11.579       "io_timeout": 0,
00:36:11.579       "avg_latency_us": 10669.988551714443,
00:36:11.579       "min_latency_us": 412.268626795857,
00:36:11.579       "max_latency_us": 3012948.0788506516
00:36:11.579     }
00:36:11.579   ],
00:36:11.579   "core_count": 1
00:36:11.579 }
00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1929527
00:36:11.579 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-13 14:31:46.164966] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
[2024-10-13 14:31:46.165052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929527 ]
[2024-10-13 14:31:46.300122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
[2024-10-13 14:31:46.349846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-13 14:31:46.376964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
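The trace entries above capture the heart of this test: multipath_status.sh@59-@60 (set_ANA_state) flip the ANA state of the two TCP listeners, the script sleeps 1 s, and check_status (sh@68-@73) asserts the current/connected/accessible flags of both I/O paths through bdevperf's RPC socket. A minimal sketch of those two helpers, reconstructed from the sh@59, sh@60, and sh@64 entries in the trace (the rpc.py path, socket, NQN, listener address, and jq filters are copied from the log; the exact bodies in test/nvmf/host/multipath_status.sh may differ):

    # Sketch reconstructed from the trace above; not a verbatim copy of the script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_sock=/var/tmp/bdevperf.sock

    port_status() {
        # sh@64: query bdevperf's NVMe I/O paths and compare one attribute
        # (current/connected/accessible) of the path on listener port $1.
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rpc" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    set_ANA_state() {
        # sh@59/@60: set the ANA state of the 4420 listener to $1 and of the 4421 listener to $2.
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Each check_status call is then six port_status assertions in the order (4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible); for example, after set_ANA_state non_optimized inaccessible (sh@133) the expected vector is check_status true false true true true false: 4420 remains the current, connected, accessible path, while 4421 stays connected but is no longer accessible. In the try.txt dump that follows, every nvme_qpair.c completion printed as ASYMMETRIC ACCESS INACCESSIBLE (03/02) is an I/O that reached a listener while its ANA group was inaccessible (status code type 3h, path-related; status code 02h); with the active_active multipath policy set at sh@116, such I/Os get retried on an accessible path, which is why the results block above still reports io_failed: 0.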
00:36:11.579 11283.00 IOPS, 44.07 MiB/s [2024-10-13T12:32:15.286Z] 11226.00 IOPS, 43.85 MiB/s [2024-10-13T12:32:15.286Z] 11188.33 IOPS, 43.70 MiB/s [2024-10-13T12:32:15.286Z] 11601.50 IOPS, 45.32 MiB/s [2024-10-13T12:32:15.286Z] 11825.80 IOPS, 46.19 MiB/s [2024-10-13T12:32:15.286Z] 11995.17 IOPS, 46.86 MiB/s [2024-10-13T12:32:15.286Z] 12150.14 IOPS, 47.46 MiB/s [2024-10-13T12:32:15.286Z] 12237.00 IOPS, 47.80 MiB/s [2024-10-13T12:32:15.286Z] 12309.78 IOPS, 48.09 MiB/s [2024-10-13T12:32:15.286Z] 12361.20 IOPS, 48.29 MiB/s [2024-10-13T12:32:15.287Z] 12435.45 IOPS, 48.58 MiB/s [2024-10-13T12:32:15.287Z] [2024-10-13 14:32:00.106866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.106901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.106931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.106938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.106949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.106954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.106965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.106970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.106981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.106986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.106996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:11.580 
[2024-10-13 14:32:00.107273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 
cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.580 [2024-10-13 14:32:00.107785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:11.580 [2024-10-13 14:32:00.107922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.107929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.107942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.107947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.107959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.107964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.107976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.107981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.107994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 
[2024-10-13 14:32:00.108275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7352 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.108984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.108997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.109002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.109021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.109041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:11.581 [2024-10-13 14:32:00.109060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.581 [2024-10-13 14:32:00.109083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.581 [2024-10-13 14:32:00.109102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.581 [2024-10-13 14:32:00.109121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.581 [2024-10-13 14:32:00.109139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.581 [2024-10-13 14:32:00.109158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:11.581 [2024-10-13 14:32:00.109171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:11.582 [log condensed: ~50 repeated nvme_qpair.c NOTICE pairs at 14:32:00.109-.110; READ commands (sqid:1, nsid:1, lba 6448-6808, len:8) and interleaved WRITE commands (lba 7392-7424, len:8, SGL DATA BLOCK OFFSET), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), qid:1, cdw0:0, sqhd 003b-006d, p:0 m:0 dnr:0]
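These notice pairs are SPDK's host-side qpair tracing: nvme_io_qpair_print_command echoes each queued READ/WRITE and spdk_nvme_print_completion decodes its completion. Status (03/02) is the NVMe path-related status "Asymmetric Access Inaccessible", i.e. the ANA state of the path this multipath test deliberately degrades, and dnr:0 means the Do Not Retry bit is clear, so the initiator is allowed to re-queue the I/O on the surviving path. For reference, a hedged way to inspect per-path ANA state from a Linux initiator with nvme-cli (the device name is illustrative):

    # per-path state for a connected multipath subsystem
    nvme list-subsys /dev/nvme0n1
    # paths typically report as "live optimized" / "live inaccessible"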
12354.75 IOPS, 48.26 MiB/s [2024-10-13T12:32:15.290Z]
11404.38 IOPS, 44.55 MiB/s [2024-10-13T12:32:15.290Z]
10589.79 IOPS, 41.37 MiB/s [2024-10-13T12:32:15.290Z]
9981.20 IOPS, 38.99 MiB/s [2024-10-13T12:32:15.290Z]
10170.88 IOPS, 39.73 MiB/s [2024-10-13T12:32:15.290Z]
10327.29 IOPS, 40.34 MiB/s [2024-10-13T12:32:15.290Z]
10674.94 IOPS, 41.70 MiB/s [2024-10-13T12:32:15.290Z]
10992.37 IOPS, 42.94 MiB/s [2024-10-13T12:32:15.290Z]
11192.50 IOPS, 43.72 MiB/s [2024-10-13T12:32:15.290Z]
11278.19 IOPS, 44.06 MiB/s [2024-10-13T12:32:15.290Z]
11354.86 IOPS, 44.35 MiB/s [2024-10-13T12:32:15.290Z]
11548.17 IOPS, 45.11 MiB/s [2024-10-13T12:32:15.290Z]
11760.54 IOPS, 45.94 MiB/s [2024-10-13T12:32:15.290Z]
00:36:11.583 [log condensed: second burst of repeated nvme_qpair.c NOTICE pairs at 14:32:12.908-.911; WRITE commands (sqid:1, nsid:1, lba 120960-121648, len:8, SGL DATA BLOCK OFFSET) and interleaved READ commands (lba 120616-120944, len:8, SGL TRANSPORT DATA BLOCK), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), qid:1, cdw0:0, sqhd 004d-000c (wrapping), p:0 m:0 dnr:0]
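The "N IOPS, M MiB/s" lines interleaved with these bursts are the bdevperf-style rolling throughput samples for the verify job. With the 4096-byte I/O size given in the job description below, MiB/s is simply IOPS/256, so each sample can be cross-checked:

    # 4096 B per I/O: MiB/s = IOPS * 4096 / 1048576 = IOPS / 256
    awk 'BEGIN { printf "%.2f MiB/s\n", 11760.54 / 256 }'    # prints 45.94 MiB/s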
11911.44 IOPS, 46.53 MiB/s [2024-10-13T12:32:15.291Z]
11949.54 IOPS, 46.68 MiB/s [2024-10-13T12:32:15.291Z]
00:36:11.584 Received shutdown signal, test time was about 26.890177 seconds
00:36:11.584
00:36:11.584 Latency(us)
00:36:11.584 Device Information                                                        : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
00:36:11.584 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:11.584 Verification LBA range: start 0x0 length 0x4000
00:36:11.584 Nvme0n1                                                                   : 26.89       11973.86  46.77  0.00    0.00  10669.99  412.27  3012948.08
00:36:11.584 ===================================================================================================================
00:36:11.584 Total                                                                     :             11973.86  46.77  0.00    0.00  10669.99  412.27  3012948.08
00:36:11.584 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:11.845 rmmod nvme_tcp
00:36:11.845 rmmod nvme_fabrics
00:36:11.845 rmmod nvme_keyring
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1929152 ']'
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1929152
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1929152 ']'
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1929152
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:36:11.845 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:11.846 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1929152
00:36:11.846 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:36:11.846 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:36:11.846 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1929152'
00:36:11.846 killing process with pid 1929152
00:36:11.846 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1929152
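Two patterns in this teardown are worth noting: nvmfcleanup retries the kernel-module unload under set +e for up to 20 iterations before restoring errexit, and killprocess checks that the pid is alive and that its comm is not sudo before killing it. A minimal sketch of both, assuming the loop simply breaks once modprobe -r succeeds (the sleep and break are assumptions, not shown in the trace):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # assumed loop body
        sleep 1
    done
    set -e

    kill -0 "$pid"                                    # fails if the process is gone
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [ "$process_name" = sudo ] || kill "$pid"         # never kill the sudo wrapper by pid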
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1929152
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:12.106 14:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:14.019 14:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:14.019
00:36:14.019 real	0m41.705s
00:36:14.019 user	1m47.412s
00:36:14.019 sys	0m11.534s
00:36:14.019 14:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:36:14.019 14:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:36:14.019 ************************************
00:36:14.019 END TEST nvmf_host_multipath_status
00:36:14.019 ************************************
00:36:14.281 14:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:36:14.281 14:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:36:14.281 14:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:36:14.281 14:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:14.281 ************************************
00:36:14.281 START TEST nvmf_discovery_remove_ifc
00:36:14.281 ************************************
00:36:14.281 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:36:14.281 * Looking for test storage...
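The banner and timing lines above come from the run_test wrapper in autotest_common.sh: it brackets each suite with START/END banners and runs it under the bash time builtin, which is where the real/user/sys triple originates. A hedged reconstruction of its shape (argument checks and xtrace handling elided):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # source of the real/user/sys summary
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }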
00:36:14.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:14.281 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:14.281 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:36:14.281 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.543 14:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.543 --rc genhtml_branch_coverage=1 00:36:14.543 --rc genhtml_function_coverage=1 00:36:14.543 --rc genhtml_legend=1 00:36:14.543 --rc geninfo_all_blocks=1 00:36:14.543 --rc geninfo_unexecuted_blocks=1 00:36:14.543 00:36:14.543 ' 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.543 --rc genhtml_branch_coverage=1 00:36:14.543 --rc genhtml_function_coverage=1 00:36:14.543 --rc genhtml_legend=1 00:36:14.543 --rc geninfo_all_blocks=1 00:36:14.543 --rc geninfo_unexecuted_blocks=1 00:36:14.543 00:36:14.543 ' 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.543 --rc genhtml_branch_coverage=1 00:36:14.543 --rc genhtml_function_coverage=1 00:36:14.543 --rc genhtml_legend=1 00:36:14.543 --rc geninfo_all_blocks=1 00:36:14.543 --rc geninfo_unexecuted_blocks=1 00:36:14.543 00:36:14.543 ' 00:36:14.543 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:14.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.544 --rc genhtml_branch_coverage=1 00:36:14.544 --rc genhtml_function_coverage=1 00:36:14.544 --rc genhtml_legend=1 00:36:14.544 --rc geninfo_all_blocks=1 00:36:14.544 --rc geninfo_unexecuted_blocks=1 00:36:14.544 00:36:14.544 ' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.544 
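The cmp_versions walk traced above checks the installed lcov against the required version by splitting both version strings on '.', '-' and ':' and comparing them numerically field by field (here 1.15 vs 2, so lt returns true). A condensed sketch of the '<' branch following the traced steps; the real scripts/common.sh also validates each field via decimal and handles the other comparison operators:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && return 0   # strictly less: true
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && return 1   # strictly greater: false
        done
        return 1    # equal, so not less-than
    }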
14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:14.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:36:14.544 14:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:36:22.689 14:32:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:22.689 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.689 14:32:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:22.689 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:22.689 Found net devices under 0000:31:00.0: cvl_0_0 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:22.689 Found net devices under 0000:31:00.1: cvl_0_1 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.689 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.690 
14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:36:22.690 00:36:22.690 --- 10.0.0.2 ping statistics --- 00:36:22.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.690 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:22.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:36:22.690 00:36:22.690 --- 10.0.0.1 ping statistics --- 00:36:22.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.690 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1939485 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1939485 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1939485 ']' 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:22.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:22.690 14:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:22.690 [2024-10-13 14:32:25.854043] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:36:22.690 [2024-10-13 14:32:25.854115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.690 [2024-10-13 14:32:25.995828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:22.690 [2024-10-13 14:32:26.045706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.690 [2024-10-13 14:32:26.071649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.690 [2024-10-13 14:32:26.071698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.690 [2024-10-13 14:32:26.071707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.690 [2024-10-13 14:32:26.071714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.690 [2024-10-13 14:32:26.071727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.690 [2024-10-13 14:32:26.072500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:23.262 [2024-10-13 14:32:26.743482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.262 [2024-10-13 14:32:26.751759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:23.262 null0 00:36:23.262 [2024-10-13 14:32:26.783608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1939667 
00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1939667 /tmp/host.sock 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1939667 ']' 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:23.262 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:23.262 14:32:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:23.262 [2024-10-13 14:32:26.861571] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:36:23.262 [2024-10-13 14:32:26.861634] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939667 ] 00:36:23.523 [2024-10-13 14:32:26.996423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:36:23.523 [2024-10-13 14:32:27.044270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.523 [2024-10-13 14:32:27.072682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.093 14:32:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:25.046 [2024-10-13 14:32:28.736764] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:25.046 [2024-10-13 14:32:28.736787] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:25.046 [2024-10-13 14:32:28.736800] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:25.307 [2024-10-13 14:32:28.823866] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:25.307 [2024-10-13 14:32:28.928203] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:25.307 [2024-10-13 14:32:28.928256] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:25.307 [2024-10-13 14:32:28.928278] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:25.307 [2024-10-13 14:32:28.928292] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:25.307 [2024-10-13 14:32:28.928313] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:25.307 14:32:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:25.307 [2024-10-13 14:32:28.936090] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1674290 was disconnected and freed. delete nvme_qpair. 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:36:25.307 14:32:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:25.567 14:32:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:26.509 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.770 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:26.770 14:32:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:27.826 14:32:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:28.765 14:32:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:29.706 14:32:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:29.706 14:32:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:31.088 [2024-10-13 14:32:34.356104] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:36:31.088 [2024-10-13 14:32:34.356142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:31.088 [2024-10-13 14:32:34.356151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:31.088 [2024-10-13 14:32:34.356158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:31.089 [2024-10-13 14:32:34.356164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:31.089 [2024-10-13 14:32:34.356170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:31.089 [2024-10-13 14:32:34.356175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:31.089 [2024-10-13 14:32:34.356181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:31.089 [2024-10-13 14:32:34.356186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:31.089 [2024-10-13 14:32:34.356192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:31.089 [2024-10-13 14:32:34.356198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:31.089 [2024-10-13 14:32:34.356203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1650b40 is same with the state(6) to be set 00:36:31.089 [2024-10-13 14:32:34.366102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1650b40 (9): Bad file descriptor 00:36:31.089 [2024-10-13 14:32:34.376115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:31.089 14:32:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:32.030 [2024-10-13 14:32:35.433217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:36:32.030 [2024-10-13 14:32:35.433306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1650b40 with addr=10.0.0.2, port=4420 00:36:32.030 [2024-10-13 14:32:35.433337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1650b40 is same with the state(6) to be set 00:36:32.030 [2024-10-13 14:32:35.433392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1650b40 (9): Bad file descriptor 00:36:32.030 [2024-10-13 14:32:35.434518] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:32.030 [2024-10-13 14:32:35.434587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:32.030 [2024-10-13 14:32:35.434608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:32.030 [2024-10-13 14:32:35.434631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:32.030 [2024-10-13 14:32:35.434695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:32.030 [2024-10-13 14:32:35.434720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:32.030 14:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.030 14:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:36:32.030 14:32:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:32.970 [2024-10-13 14:32:36.434775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:32.970 [2024-10-13 14:32:36.434792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:32.970 [2024-10-13 14:32:36.434797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:32.970 [2024-10-13 14:32:36.434803] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:36:32.970 [2024-10-13 14:32:36.434812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:32.970 [2024-10-13 14:32:36.434826] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:36:32.970 [2024-10-13 14:32:36.434843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.970 [2024-10-13 14:32:36.434851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.970 [2024-10-13 14:32:36.434858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.970 [2024-10-13 14:32:36.434863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.970 [2024-10-13 14:32:36.434869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.970 [2024-10-13 14:32:36.434875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.970 [2024-10-13 14:32:36.434880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.970 [2024-10-13 14:32:36.434894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.970 [2024-10-13 14:32:36.434901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:36:32.970 [2024-10-13 14:32:36.434906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:32.970 [2024-10-13 14:32:36.434911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:36:32.970 [2024-10-13 14:32:36.435349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1640250 (9): Bad file descriptor 00:36:32.970 [2024-10-13 14:32:36.436357] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:36:32.970 [2024-10-13 14:32:36.436364] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:32.970 14:32:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:34.354 14:32:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:34.354 14:32:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:34.923 [2024-10-13 14:32:38.491018] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:34.923 [2024-10-13 14:32:38.491031] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:34.923 [2024-10-13 14:32:38.491040] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:34.923 [2024-10-13 14:32:38.623130] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:36:35.183 14:32:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:36:35.183 [2024-10-13 14:32:38.843529] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:36:35.183 [2024-10-13 14:32:38.843561] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:36:35.183 [2024-10-13 14:32:38.843575] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:36:35.183 [2024-10-13 14:32:38.843586] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:36:35.183 [2024-10-13 14:32:38.843591] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:35.183 [2024-10-13 14:32:38.847282] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1654e60 was disconnected and freed. 
delete nvme_qpair. 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1939667 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1939667 ']' 00:36:36.123 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1939667 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1939667 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1939667' 00:36:36.384 killing process with pid 1939667 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1939667 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1939667 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:36.384 14:32:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:36.384 rmmod nvme_tcp 00:36:36.384 rmmod nvme_fabrics 00:36:36.384 rmmod nvme_keyring 
00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1939485 ']' 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1939485 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1939485 ']' 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1939485 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:36.384 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1939485 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1939485' 00:36:36.645 killing process with pid 1939485 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1939485 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1939485 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:36.645 14:32:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:39.189 00:36:39.189 real 0m24.504s 00:36:39.189 user 0m28.997s 00:36:39.189 sys 0m7.313s 00:36:39.189 14:32:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:36:39.189 ************************************ 00:36:39.189 END TEST nvmf_discovery_remove_ifc 00:36:39.189 ************************************ 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.189 ************************************ 00:36:39.189 START TEST nvmf_identify_kernel_target 00:36:39.189 ************************************ 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:36:39.189 * Looking for test storage... 00:36:39.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.189 --rc genhtml_branch_coverage=1 00:36:39.189 --rc genhtml_function_coverage=1 00:36:39.189 --rc genhtml_legend=1 00:36:39.189 --rc geninfo_all_blocks=1 00:36:39.189 --rc geninfo_unexecuted_blocks=1 00:36:39.189 00:36:39.189 ' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.189 --rc genhtml_branch_coverage=1 00:36:39.189 --rc genhtml_function_coverage=1 00:36:39.189 --rc genhtml_legend=1 00:36:39.189 --rc geninfo_all_blocks=1 00:36:39.189 --rc geninfo_unexecuted_blocks=1 00:36:39.189 00:36:39.189 ' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.189 --rc genhtml_branch_coverage=1 00:36:39.189 --rc genhtml_function_coverage=1 00:36:39.189 --rc genhtml_legend=1 00:36:39.189 --rc geninfo_all_blocks=1 00:36:39.189 --rc geninfo_unexecuted_blocks=1 00:36:39.189 00:36:39.189 ' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:39.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.189 --rc genhtml_branch_coverage=1 00:36:39.189 --rc genhtml_function_coverage=1 00:36:39.189 --rc genhtml_legend=1 00:36:39.189 --rc geninfo_all_blocks=1 00:36:39.189 --rc geninfo_unexecuted_blocks=1 00:36:39.189 00:36:39.189 ' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.189 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:36:39.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:39.190 14:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:47.336 14:32:49 
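
The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 applying an arithmetic test to an empty string ('[' '' -eq 1 ']'). A minimal sketch of the failure and the usual guard, with VAR as a stand-in for whatever variable was empty here:

    VAR=''                                    # empty, as in the trace
    [ "$VAR" -eq 1 ] || true                  # prints: [: : integer expression expected
    [ "${VAR:-0}" -eq 1 ] || echo "disabled"  # guard: empty defaults to 0, test stays valid
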
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.336 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:47.337 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:47.337 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:47.337 Found net devices under 0000:31:00.0: cvl_0_0 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:47.337 Found net devices under 0000:31:00.1: cvl_0_1 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
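
The loop above resolves each matched PCI function to its kernel net devices by globbing sysfs; the two "Found net devices under ..." lines are its output. A minimal standalone version of that lookup:

    # Enumerate netdevs bound to the NVMf-capable PCI functions, as the
    # pci_net_devs glob in the trace does.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue        # skip if the glob matched nothing
            echo "Found net devices under $pci: ${path##*/}"
        done
    done
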
-- # net_devs+=("${pci_net_devs[@]}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:47.337 14:32:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:47.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:36:47.337 00:36:47.337 --- 10.0.0.2 ping statistics --- 00:36:47.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.337 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:47.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:36:47.337 00:36:47.337 --- 10.0.0.1 ping statistics --- 00:36:47.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.337 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:47.337 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.338 14:32:50 
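
nvmf_tcp_init above pushes the target port into its own network namespace, addresses both ends on 10.0.0.0/24, opens TCP 4420 with a tagged iptables rule, and ping-checks both directions. A minimal sketch of the same topology, assuming eth_ini/eth_tgt stand in for cvl_0_1/cvl_0_0 and tgt_ns for cvl_0_0_ns_spdk:

    ip netns add tgt_ns
    ip link set eth_tgt netns tgt_ns                          # target port into its namespace
    ip addr add 10.0.0.1/24 dev eth_ini                       # initiator side (host)
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt  # target side (namespace)
    ip link set eth_ini up
    ip netns exec tgt_ns ip link set eth_tgt up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow 4420'          # tag so cleanup can strip it
    ping -c 1 10.0.0.2                                        # host -> namespace
    ip netns exec tgt_ns ping -c 1 10.0.0.1                   # namespace -> host
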
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:47.338 14:32:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:49.881 Waiting for block devices as requested 00:36:50.141 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:50.141 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:50.141 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:50.402 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:50.402 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:50.402 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:50.663 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:50.663 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:50.663 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:50.924 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:50.924 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:50.924 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:51.185 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:51.185 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:51.185 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:51.446 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:51.446 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
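
configure_kernel_target drives the Linux nvmet target purely through configfs; the trace just below creates the subsystem, namespace, and port directories, writes the backing device and address attributes, and links the port to the subsystem. Because xtrace shows only the echoed values, not the redirect targets, the standard nvmet attribute file names used in this sketch are an assumption:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet nvmet-tcp                  # nvmet_tcp is unloaded at cleanup, so it is in play here
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attr_model
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed attr_allow_any_host
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # device the block scan selected
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port

Once the symlink is in place the target answers discovery, which is what the nvme discover call against 10.0.0.1:4420 with the generated hostnqn then verifies in the trace.
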
00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:51.707 No valid GPT data, bailing 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:51.707 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:51.968 00:36:51.968 Discovery Log Number of Records 2, Generation counter 2 00:36:51.968 =====Discovery Log Entry 0====== 00:36:51.968 trtype: tcp 00:36:51.968 adrfam: ipv4 00:36:51.968 subtype: current discovery subsystem 00:36:51.968 treq: not specified, sq flow control disable supported 00:36:51.968 portid: 1 00:36:51.968 trsvcid: 4420 00:36:51.968 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:51.968 traddr: 10.0.0.1 00:36:51.968 eflags: none 00:36:51.968 sectype: none 00:36:51.968 =====Discovery Log Entry 1====== 00:36:51.968 trtype: tcp 00:36:51.968 adrfam: ipv4 00:36:51.968 subtype: nvme subsystem 00:36:51.968 treq: not specified, sq flow control disable 
supported 00:36:51.968 portid: 1 00:36:51.968 trsvcid: 4420 00:36:51.968 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:51.968 traddr: 10.0.0.1 00:36:51.968 eflags: none 00:36:51.968 sectype: none 00:36:51.968 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:51.968 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:52.231 ===================================================== 00:36:52.231 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:52.231 ===================================================== 00:36:52.231 Controller Capabilities/Features 00:36:52.231 ================================ 00:36:52.231 Vendor ID: 0000 00:36:52.231 Subsystem Vendor ID: 0000 00:36:52.231 Serial Number: 2b43f06cfe2239008c73 00:36:52.231 Model Number: Linux 00:36:52.231 Firmware Version: 6.8.9-20 00:36:52.231 Recommended Arb Burst: 0 00:36:52.231 IEEE OUI Identifier: 00 00 00 00:36:52.231 Multi-path I/O 00:36:52.231 May have multiple subsystem ports: No 00:36:52.231 May have multiple controllers: No 00:36:52.231 Associated with SR-IOV VF: No 00:36:52.231 Max Data Transfer Size: Unlimited 00:36:52.231 Max Number of Namespaces: 0 00:36:52.231 Max Number of I/O Queues: 1024 00:36:52.231 NVMe Specification Version (VS): 1.3 00:36:52.231 NVMe Specification Version (Identify): 1.3 00:36:52.231 Maximum Queue Entries: 1024 00:36:52.231 Contiguous Queues Required: No 00:36:52.231 Arbitration Mechanisms Supported 00:36:52.231 Weighted Round Robin: Not Supported 00:36:52.231 Vendor Specific: Not Supported 00:36:52.231 Reset Timeout: 7500 ms 00:36:52.231 Doorbell Stride: 4 bytes 00:36:52.231 NVM Subsystem Reset: Not Supported 00:36:52.231 Command Sets Supported 00:36:52.231 NVM Command Set: Supported 00:36:52.231 Boot Partition: Not Supported 00:36:52.231 Memory Page Size Minimum: 4096 bytes 00:36:52.231 Memory Page Size Maximum: 4096 bytes 00:36:52.231 Persistent Memory Region: Not Supported 00:36:52.231 Optional Asynchronous Events Supported 00:36:52.231 Namespace Attribute Notices: Not Supported 00:36:52.231 Firmware Activation Notices: Not Supported 00:36:52.231 ANA Change Notices: Not Supported 00:36:52.231 PLE Aggregate Log Change Notices: Not Supported 00:36:52.231 LBA Status Info Alert Notices: Not Supported 00:36:52.231 EGE Aggregate Log Change Notices: Not Supported 00:36:52.231 Normal NVM Subsystem Shutdown event: Not Supported 00:36:52.231 Zone Descriptor Change Notices: Not Supported 00:36:52.231 Discovery Log Change Notices: Supported 00:36:52.231 Controller Attributes 00:36:52.231 128-bit Host Identifier: Not Supported 00:36:52.231 Non-Operational Permissive Mode: Not Supported 00:36:52.231 NVM Sets: Not Supported 00:36:52.231 Read Recovery Levels: Not Supported 00:36:52.231 Endurance Groups: Not Supported 00:36:52.231 Predictable Latency Mode: Not Supported 00:36:52.231 Traffic Based Keep ALive: Not Supported 00:36:52.231 Namespace Granularity: Not Supported 00:36:52.231 SQ Associations: Not Supported 00:36:52.231 UUID List: Not Supported 00:36:52.231 Multi-Domain Subsystem: Not Supported 00:36:52.231 Fixed Capacity Management: Not Supported 00:36:52.231 Variable Capacity Management: Not Supported 00:36:52.231 Delete Endurance Group: Not Supported 00:36:52.231 Delete NVM Set: Not Supported 00:36:52.231 Extended LBA Formats Supported: Not Supported 00:36:52.231 Flexible Data Placement 
Supported: Not Supported 00:36:52.231 00:36:52.231 Controller Memory Buffer Support 00:36:52.231 ================================ 00:36:52.231 Supported: No 00:36:52.231 00:36:52.231 Persistent Memory Region Support 00:36:52.231 ================================ 00:36:52.231 Supported: No 00:36:52.231 00:36:52.231 Admin Command Set Attributes 00:36:52.231 ============================ 00:36:52.231 Security Send/Receive: Not Supported 00:36:52.231 Format NVM: Not Supported 00:36:52.231 Firmware Activate/Download: Not Supported 00:36:52.231 Namespace Management: Not Supported 00:36:52.231 Device Self-Test: Not Supported 00:36:52.231 Directives: Not Supported 00:36:52.231 NVMe-MI: Not Supported 00:36:52.231 Virtualization Management: Not Supported 00:36:52.231 Doorbell Buffer Config: Not Supported 00:36:52.231 Get LBA Status Capability: Not Supported 00:36:52.231 Command & Feature Lockdown Capability: Not Supported 00:36:52.231 Abort Command Limit: 1 00:36:52.231 Async Event Request Limit: 1 00:36:52.231 Number of Firmware Slots: N/A 00:36:52.231 Firmware Slot 1 Read-Only: N/A 00:36:52.231 Firmware Activation Without Reset: N/A 00:36:52.231 Multiple Update Detection Support: N/A 00:36:52.231 Firmware Update Granularity: No Information Provided 00:36:52.231 Per-Namespace SMART Log: No 00:36:52.231 Asymmetric Namespace Access Log Page: Not Supported 00:36:52.231 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:52.231 Command Effects Log Page: Not Supported 00:36:52.231 Get Log Page Extended Data: Supported 00:36:52.231 Telemetry Log Pages: Not Supported 00:36:52.231 Persistent Event Log Pages: Not Supported 00:36:52.231 Supported Log Pages Log Page: May Support 00:36:52.231 Commands Supported & Effects Log Page: Not Supported 00:36:52.231 Feature Identifiers & Effects Log Page:May Support 00:36:52.231 NVMe-MI Commands & Effects Log Page: May Support 00:36:52.231 Data Area 4 for Telemetry Log: Not Supported 00:36:52.231 Error Log Page Entries Supported: 1 00:36:52.231 Keep Alive: Not Supported 00:36:52.231 00:36:52.231 NVM Command Set Attributes 00:36:52.231 ========================== 00:36:52.231 Submission Queue Entry Size 00:36:52.231 Max: 1 00:36:52.231 Min: 1 00:36:52.231 Completion Queue Entry Size 00:36:52.231 Max: 1 00:36:52.231 Min: 1 00:36:52.231 Number of Namespaces: 0 00:36:52.231 Compare Command: Not Supported 00:36:52.231 Write Uncorrectable Command: Not Supported 00:36:52.231 Dataset Management Command: Not Supported 00:36:52.231 Write Zeroes Command: Not Supported 00:36:52.231 Set Features Save Field: Not Supported 00:36:52.231 Reservations: Not Supported 00:36:52.231 Timestamp: Not Supported 00:36:52.231 Copy: Not Supported 00:36:52.231 Volatile Write Cache: Not Present 00:36:52.231 Atomic Write Unit (Normal): 1 00:36:52.231 Atomic Write Unit (PFail): 1 00:36:52.231 Atomic Compare & Write Unit: 1 00:36:52.231 Fused Compare & Write: Not Supported 00:36:52.231 Scatter-Gather List 00:36:52.231 SGL Command Set: Supported 00:36:52.231 SGL Keyed: Not Supported 00:36:52.231 SGL Bit Bucket Descriptor: Not Supported 00:36:52.231 SGL Metadata Pointer: Not Supported 00:36:52.231 Oversized SGL: Not Supported 00:36:52.231 SGL Metadata Address: Not Supported 00:36:52.231 SGL Offset: Supported 00:36:52.231 Transport SGL Data Block: Not Supported 00:36:52.231 Replay Protected Memory Block: Not Supported 00:36:52.231 00:36:52.231 Firmware Slot Information 00:36:52.231 ========================= 00:36:52.231 Active slot: 0 00:36:52.231 00:36:52.231 00:36:52.231 Error Log 00:36:52.231 
========= 00:36:52.231 00:36:52.231 Active Namespaces 00:36:52.231 ================= 00:36:52.231 Discovery Log Page 00:36:52.231 ================== 00:36:52.231 Generation Counter: 2 00:36:52.231 Number of Records: 2 00:36:52.231 Record Format: 0 00:36:52.231 00:36:52.231 Discovery Log Entry 0 00:36:52.231 ---------------------- 00:36:52.231 Transport Type: 3 (TCP) 00:36:52.231 Address Family: 1 (IPv4) 00:36:52.231 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:52.231 Entry Flags: 00:36:52.231 Duplicate Returned Information: 0 00:36:52.231 Explicit Persistent Connection Support for Discovery: 0 00:36:52.231 Transport Requirements: 00:36:52.231 Secure Channel: Not Specified 00:36:52.231 Port ID: 1 (0x0001) 00:36:52.231 Controller ID: 65535 (0xffff) 00:36:52.231 Admin Max SQ Size: 32 00:36:52.231 Transport Service Identifier: 4420 00:36:52.231 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:52.231 Transport Address: 10.0.0.1 00:36:52.231 Discovery Log Entry 1 00:36:52.231 ---------------------- 00:36:52.231 Transport Type: 3 (TCP) 00:36:52.231 Address Family: 1 (IPv4) 00:36:52.231 Subsystem Type: 2 (NVM Subsystem) 00:36:52.231 Entry Flags: 00:36:52.231 Duplicate Returned Information: 0 00:36:52.231 Explicit Persistent Connection Support for Discovery: 0 00:36:52.231 Transport Requirements: 00:36:52.231 Secure Channel: Not Specified 00:36:52.231 Port ID: 1 (0x0001) 00:36:52.231 Controller ID: 65535 (0xffff) 00:36:52.231 Admin Max SQ Size: 32 00:36:52.231 Transport Service Identifier: 4420 00:36:52.231 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:52.231 Transport Address: 10.0.0.1 00:36:52.231 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.231 get_feature(0x01) failed 00:36:52.231 get_feature(0x02) failed 00:36:52.231 get_feature(0x04) failed 00:36:52.231 ===================================================== 00:36:52.231 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.231 ===================================================== 00:36:52.232 Controller Capabilities/Features 00:36:52.232 ================================ 00:36:52.232 Vendor ID: 0000 00:36:52.232 Subsystem Vendor ID: 0000 00:36:52.232 Serial Number: 218f808fbdd6e31c7fe5 00:36:52.232 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:52.232 Firmware Version: 6.8.9-20 00:36:52.232 Recommended Arb Burst: 6 00:36:52.232 IEEE OUI Identifier: 00 00 00 00:36:52.232 Multi-path I/O 00:36:52.232 May have multiple subsystem ports: Yes 00:36:52.232 May have multiple controllers: Yes 00:36:52.232 Associated with SR-IOV VF: No 00:36:52.232 Max Data Transfer Size: Unlimited 00:36:52.232 Max Number of Namespaces: 1024 00:36:52.232 Max Number of I/O Queues: 128 00:36:52.232 NVMe Specification Version (VS): 1.3 00:36:52.232 NVMe Specification Version (Identify): 1.3 00:36:52.232 Maximum Queue Entries: 1024 00:36:52.232 Contiguous Queues Required: No 00:36:52.232 Arbitration Mechanisms Supported 00:36:52.232 Weighted Round Robin: Not Supported 00:36:52.232 Vendor Specific: Not Supported 00:36:52.232 Reset Timeout: 7500 ms 00:36:52.232 Doorbell Stride: 4 bytes 00:36:52.232 NVM Subsystem Reset: Not Supported 00:36:52.232 Command Sets Supported 00:36:52.232 NVM Command Set: Supported 00:36:52.232 Boot Partition: Not Supported 00:36:52.232 
Memory Page Size Minimum: 4096 bytes 00:36:52.232 Memory Page Size Maximum: 4096 bytes 00:36:52.232 Persistent Memory Region: Not Supported 00:36:52.232 Optional Asynchronous Events Supported 00:36:52.232 Namespace Attribute Notices: Supported 00:36:52.232 Firmware Activation Notices: Not Supported 00:36:52.232 ANA Change Notices: Supported 00:36:52.232 PLE Aggregate Log Change Notices: Not Supported 00:36:52.232 LBA Status Info Alert Notices: Not Supported 00:36:52.232 EGE Aggregate Log Change Notices: Not Supported 00:36:52.232 Normal NVM Subsystem Shutdown event: Not Supported 00:36:52.232 Zone Descriptor Change Notices: Not Supported 00:36:52.232 Discovery Log Change Notices: Not Supported 00:36:52.232 Controller Attributes 00:36:52.232 128-bit Host Identifier: Supported 00:36:52.232 Non-Operational Permissive Mode: Not Supported 00:36:52.232 NVM Sets: Not Supported 00:36:52.232 Read Recovery Levels: Not Supported 00:36:52.232 Endurance Groups: Not Supported 00:36:52.232 Predictable Latency Mode: Not Supported 00:36:52.232 Traffic Based Keep ALive: Supported 00:36:52.232 Namespace Granularity: Not Supported 00:36:52.232 SQ Associations: Not Supported 00:36:52.232 UUID List: Not Supported 00:36:52.232 Multi-Domain Subsystem: Not Supported 00:36:52.232 Fixed Capacity Management: Not Supported 00:36:52.232 Variable Capacity Management: Not Supported 00:36:52.232 Delete Endurance Group: Not Supported 00:36:52.232 Delete NVM Set: Not Supported 00:36:52.232 Extended LBA Formats Supported: Not Supported 00:36:52.232 Flexible Data Placement Supported: Not Supported 00:36:52.232 00:36:52.232 Controller Memory Buffer Support 00:36:52.232 ================================ 00:36:52.232 Supported: No 00:36:52.232 00:36:52.232 Persistent Memory Region Support 00:36:52.232 ================================ 00:36:52.232 Supported: No 00:36:52.232 00:36:52.232 Admin Command Set Attributes 00:36:52.232 ============================ 00:36:52.232 Security Send/Receive: Not Supported 00:36:52.232 Format NVM: Not Supported 00:36:52.232 Firmware Activate/Download: Not Supported 00:36:52.232 Namespace Management: Not Supported 00:36:52.232 Device Self-Test: Not Supported 00:36:52.232 Directives: Not Supported 00:36:52.232 NVMe-MI: Not Supported 00:36:52.232 Virtualization Management: Not Supported 00:36:52.232 Doorbell Buffer Config: Not Supported 00:36:52.232 Get LBA Status Capability: Not Supported 00:36:52.232 Command & Feature Lockdown Capability: Not Supported 00:36:52.232 Abort Command Limit: 4 00:36:52.232 Async Event Request Limit: 4 00:36:52.232 Number of Firmware Slots: N/A 00:36:52.232 Firmware Slot 1 Read-Only: N/A 00:36:52.232 Firmware Activation Without Reset: N/A 00:36:52.232 Multiple Update Detection Support: N/A 00:36:52.232 Firmware Update Granularity: No Information Provided 00:36:52.232 Per-Namespace SMART Log: Yes 00:36:52.232 Asymmetric Namespace Access Log Page: Supported 00:36:52.232 ANA Transition Time : 10 sec 00:36:52.232 00:36:52.232 Asymmetric Namespace Access Capabilities 00:36:52.232 ANA Optimized State : Supported 00:36:52.232 ANA Non-Optimized State : Supported 00:36:52.232 ANA Inaccessible State : Supported 00:36:52.232 ANA Persistent Loss State : Supported 00:36:52.232 ANA Change State : Supported 00:36:52.232 ANAGRPID is not changed : No 00:36:52.232 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:52.232 00:36:52.232 ANA Group Identifier Maximum : 128 00:36:52.232 Number of ANA Group Identifiers : 128 00:36:52.232 Max Number of Allowed Namespaces : 1024 00:36:52.232 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:52.232 Command Effects Log Page: Supported 00:36:52.232 Get Log Page Extended Data: Supported 00:36:52.232 Telemetry Log Pages: Not Supported 00:36:52.232 Persistent Event Log Pages: Not Supported 00:36:52.232 Supported Log Pages Log Page: May Support 00:36:52.232 Commands Supported & Effects Log Page: Not Supported 00:36:52.232 Feature Identifiers & Effects Log Page:May Support 00:36:52.232 NVMe-MI Commands & Effects Log Page: May Support 00:36:52.232 Data Area 4 for Telemetry Log: Not Supported 00:36:52.232 Error Log Page Entries Supported: 128 00:36:52.232 Keep Alive: Supported 00:36:52.232 Keep Alive Granularity: 1000 ms 00:36:52.232 00:36:52.232 NVM Command Set Attributes 00:36:52.232 ========================== 00:36:52.232 Submission Queue Entry Size 00:36:52.232 Max: 64 00:36:52.232 Min: 64 00:36:52.232 Completion Queue Entry Size 00:36:52.232 Max: 16 00:36:52.232 Min: 16 00:36:52.232 Number of Namespaces: 1024 00:36:52.232 Compare Command: Not Supported 00:36:52.232 Write Uncorrectable Command: Not Supported 00:36:52.232 Dataset Management Command: Supported 00:36:52.232 Write Zeroes Command: Supported 00:36:52.232 Set Features Save Field: Not Supported 00:36:52.232 Reservations: Not Supported 00:36:52.232 Timestamp: Not Supported 00:36:52.232 Copy: Not Supported 00:36:52.232 Volatile Write Cache: Present 00:36:52.232 Atomic Write Unit (Normal): 1 00:36:52.232 Atomic Write Unit (PFail): 1 00:36:52.232 Atomic Compare & Write Unit: 1 00:36:52.232 Fused Compare & Write: Not Supported 00:36:52.232 Scatter-Gather List 00:36:52.232 SGL Command Set: Supported 00:36:52.232 SGL Keyed: Not Supported 00:36:52.232 SGL Bit Bucket Descriptor: Not Supported 00:36:52.232 SGL Metadata Pointer: Not Supported 00:36:52.232 Oversized SGL: Not Supported 00:36:52.232 SGL Metadata Address: Not Supported 00:36:52.232 SGL Offset: Supported 00:36:52.232 Transport SGL Data Block: Not Supported 00:36:52.232 Replay Protected Memory Block: Not Supported 00:36:52.232 00:36:52.232 Firmware Slot Information 00:36:52.232 ========================= 00:36:52.232 Active slot: 0 00:36:52.232 00:36:52.232 Asymmetric Namespace Access 00:36:52.232 =========================== 00:36:52.232 Change Count : 0 00:36:52.232 Number of ANA Group Descriptors : 1 00:36:52.232 ANA Group Descriptor : 0 00:36:52.232 ANA Group ID : 1 00:36:52.232 Number of NSID Values : 1 00:36:52.232 Change Count : 0 00:36:52.232 ANA State : 1 00:36:52.232 Namespace Identifier : 1 00:36:52.232 00:36:52.232 Commands Supported and Effects 00:36:52.232 ============================== 00:36:52.232 Admin Commands 00:36:52.232 -------------- 00:36:52.232 Get Log Page (02h): Supported 00:36:52.232 Identify (06h): Supported 00:36:52.232 Abort (08h): Supported 00:36:52.232 Set Features (09h): Supported 00:36:52.232 Get Features (0Ah): Supported 00:36:52.232 Asynchronous Event Request (0Ch): Supported 00:36:52.232 Keep Alive (18h): Supported 00:36:52.232 I/O Commands 00:36:52.232 ------------ 00:36:52.232 Flush (00h): Supported 00:36:52.232 Write (01h): Supported LBA-Change 00:36:52.232 Read (02h): Supported 00:36:52.232 Write Zeroes (08h): Supported LBA-Change 00:36:52.232 Dataset Management (09h): Supported 00:36:52.232 00:36:52.232 Error Log 00:36:52.232 ========= 00:36:52.232 Entry: 0 00:36:52.232 Error Count: 0x3 00:36:52.232 Submission Queue Id: 0x0 00:36:52.232 Command Id: 0x5 00:36:52.232 Phase Bit: 0 00:36:52.232 Status Code: 0x2 00:36:52.232 Status Code Type: 0x0 00:36:52.232 Do Not Retry: 1 00:36:52.494 
Error Location: 0x28 00:36:52.494 LBA: 0x0 00:36:52.494 Namespace: 0x0 00:36:52.494 Vendor Log Page: 0x0 00:36:52.494 ----------- 00:36:52.494 Entry: 1 00:36:52.494 Error Count: 0x2 00:36:52.494 Submission Queue Id: 0x0 00:36:52.494 Command Id: 0x5 00:36:52.494 Phase Bit: 0 00:36:52.494 Status Code: 0x2 00:36:52.494 Status Code Type: 0x0 00:36:52.494 Do Not Retry: 1 00:36:52.494 Error Location: 0x28 00:36:52.494 LBA: 0x0 00:36:52.494 Namespace: 0x0 00:36:52.494 Vendor Log Page: 0x0 00:36:52.494 ----------- 00:36:52.494 Entry: 2 00:36:52.494 Error Count: 0x1 00:36:52.494 Submission Queue Id: 0x0 00:36:52.494 Command Id: 0x4 00:36:52.494 Phase Bit: 0 00:36:52.494 Status Code: 0x2 00:36:52.494 Status Code Type: 0x0 00:36:52.494 Do Not Retry: 1 00:36:52.494 Error Location: 0x28 00:36:52.494 LBA: 0x0 00:36:52.494 Namespace: 0x0 00:36:52.494 Vendor Log Page: 0x0 00:36:52.494 00:36:52.494 Number of Queues 00:36:52.494 ================ 00:36:52.494 Number of I/O Submission Queues: 128 00:36:52.494 Number of I/O Completion Queues: 128 00:36:52.494 00:36:52.494 ZNS Specific Controller Data 00:36:52.494 ============================ 00:36:52.494 Zone Append Size Limit: 0 00:36:52.494 00:36:52.494 00:36:52.494 Active Namespaces 00:36:52.494 ================= 00:36:52.494 get_feature(0x05) failed 00:36:52.494 Namespace ID:1 00:36:52.494 Command Set Identifier: NVM (00h) 00:36:52.494 Deallocate: Supported 00:36:52.494 Deallocated/Unwritten Error: Not Supported 00:36:52.494 Deallocated Read Value: Unknown 00:36:52.494 Deallocate in Write Zeroes: Not Supported 00:36:52.494 Deallocated Guard Field: 0xFFFF 00:36:52.494 Flush: Supported 00:36:52.494 Reservation: Not Supported 00:36:52.494 Namespace Sharing Capabilities: Multiple Controllers 00:36:52.494 Size (in LBAs): 3750748848 (1788GiB) 00:36:52.494 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:52.494 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:52.494 UUID: e6cd3dfb-c94e-4b49-adb0-6149fba7d224 00:36:52.494 Thin Provisioning: Not Supported 00:36:52.494 Per-NS Atomic Units: Yes 00:36:52.494 Atomic Write Unit (Normal): 8 00:36:52.494 Atomic Write Unit (PFail): 8 00:36:52.494 Preferred Write Granularity: 8 00:36:52.494 Atomic Compare & Write Unit: 8 00:36:52.494 Atomic Boundary Size (Normal): 0 00:36:52.494 Atomic Boundary Size (PFail): 0 00:36:52.494 Atomic Boundary Offset: 0 00:36:52.494 NGUID/EUI64 Never Reused: No 00:36:52.494 ANA group ID: 1 00:36:52.494 Namespace Write Protected: No 00:36:52.494 Number of LBA Formats: 1 00:36:52.494 Current LBA Format: LBA Format #00 00:36:52.494 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:52.494 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.494 rmmod nvme_tcp 00:36:52.494 rmmod nvme_fabrics 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
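
nvmftestfini and clean_kernel_target, traced here and just below, unwind everything in reverse: initiator modules out (the rmmod lines), the SPDK-tagged iptables rule stripped, the namespace deleted, and the configfs tree removed leaf-first before nvmet is unloaded. A condensed sketch, reusing the stand-in names from the bring-up sketch above:

    modprobe -r nvme-tcp nvme-fabrics                     # initiator side down
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged rule
    ip netns del tgt_ns                                   # stand-in for cvl_0_0_ns_spdk

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"                # detach the namespace first
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink port -> subsystem
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet                           # unload once configfs is empty
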
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:52.494 14:32:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:36:52.494 14:32:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:52.494 14:32:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:52.494 14:32:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.494 14:32:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.494 14:32:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:36:54.408 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:36:54.670 14:32:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:58.882 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:58.882 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:58.882 00:36:58.882 real 0m19.924s 00:36:58.882 user 0m5.460s 00:36:58.882 sys 0m11.305s 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:58.882 ************************************ 00:36:58.882 END TEST nvmf_identify_kernel_target 00:36:58.882 ************************************ 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.882 ************************************ 00:36:58.882 START TEST nvmf_auth_host 00:36:58.882 ************************************ 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:58.882 * Looking for test storage... 
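
Each suite runs through run_test, which prints the START/END banners and the real/user/sys summary seen above before the next suite begins. A sketch of the wrapper's shape only, inferred from those banners and the time output; the actual SPDK helper also performs the argument check and xtrace toggling visible in the trace:

    # Shape of run_test as suggested by the log; not the exact SPDK implementation.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                       # produces the real/user/sys lines
        echo "************ END TEST $name ************"
    }
    run_test nvmf_auth_host ./auth.sh --transport=tcp
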
00:36:58.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:58.882 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:59.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.144 --rc genhtml_branch_coverage=1 00:36:59.144 --rc genhtml_function_coverage=1 00:36:59.144 --rc genhtml_legend=1 00:36:59.144 --rc geninfo_all_blocks=1 00:36:59.144 --rc geninfo_unexecuted_blocks=1 00:36:59.144 00:36:59.144 ' 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:59.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.144 --rc genhtml_branch_coverage=1 00:36:59.144 --rc genhtml_function_coverage=1 00:36:59.144 --rc genhtml_legend=1 00:36:59.144 --rc geninfo_all_blocks=1 00:36:59.144 --rc geninfo_unexecuted_blocks=1 00:36:59.144 00:36:59.144 ' 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:59.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.144 --rc genhtml_branch_coverage=1 00:36:59.144 --rc genhtml_function_coverage=1 00:36:59.144 --rc genhtml_legend=1 00:36:59.144 --rc geninfo_all_blocks=1 00:36:59.144 --rc geninfo_unexecuted_blocks=1 00:36:59.144 00:36:59.144 ' 00:36:59.144 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:59.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.144 --rc genhtml_branch_coverage=1 00:36:59.144 --rc genhtml_function_coverage=1 00:36:59.144 --rc genhtml_legend=1 00:36:59.144 --rc geninfo_all_blocks=1 00:36:59.144 --rc geninfo_unexecuted_blocks=1 00:36:59.144 00:36:59.144 ' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.145 14:33:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:59.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:59.145 14:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:37:07.293 14:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:07.293 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:07.293 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.293 
14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:07.293 Found net devices under 0000:31:00.0: cvl_0_0 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:07.293 Found net devices under 0000:31:00.1: cvl_0_1 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:07.293 14:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:07.293 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:07.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:07.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:37:07.294 00:37:07.294 --- 10.0.0.2 ping statistics --- 00:37:07.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.294 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:07.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:07.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:37:07.294 00:37:07.294 --- 10.0.0.1 ping statistics --- 00:37:07.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.294 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1954702 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1954702 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1954702 ']' 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
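The block above (nvmf/common.sh@250-291 plus nvmfappstart) builds the two-interface TCP test rig and launches the target. A minimal sketch of the same sequence, using the interface names and addresses the log reports; the waitforlisten step is simplified here to polling for the RPC socket, an approximation of what autotest_common.sh actually does:

```bash
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init + nvmfappstart as traced above. Interface
# names (cvl_0_0/cvl_0_1) and IPs are taken from the log itself.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the SPDK target inside the namespace; poll for its RPC socket
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done
```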
00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.294 14:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.556 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:07.556 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:37:07.556 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:07.556 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:07.556 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.817 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.817 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:07.817 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:07.817 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:07.817 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=782fd7936639a50092aeeeb57ce90533 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.2Aw 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 782fd7936639a50092aeeeb57ce90533 0 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 782fd7936639a50092aeeeb57ce90533 0 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=782fd7936639a50092aeeeb57ce90533 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.2Aw 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.2Aw 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2Aw 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:07.818 14:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=45d53e8d9bf9db0ee356fabc593787f0cab256ff5f50dfc8e26a3651e812e00d 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.4rP 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 45d53e8d9bf9db0ee356fabc593787f0cab256ff5f50dfc8e26a3651e812e00d 3 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 45d53e8d9bf9db0ee356fabc593787f0cab256ff5f50dfc8e26a3651e812e00d 3 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=45d53e8d9bf9db0ee356fabc593787f0cab256ff5f50dfc8e26a3651e812e00d 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.4rP 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.4rP 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4rP 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3bc2d0de623f1486abd65edbd61c8162d14594a727c5dca7 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.63E 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3bc2d0de623f1486abd65edbd61c8162d14594a727c5dca7 0 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3bc2d0de623f1486abd65edbd61c8162d14594a727c5dca7 0 
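gen_dhchap_key (nvmf/common.sh@749-758), traced once per entry of keys[]/ckeys[] above and below, draws len/2 random bytes as hex and wraps them into a DHHC-1 secret file. A sketch of one iteration; the python helper body is my reconstruction (base64 of the ASCII hex key plus a little-endian CRC32 tail, which matches the DHHC-1:00:...: values visible later in this log), not the literal common.sh code:

```bash
#!/usr/bin/env bash
# One gen_dhchap_key iteration, per the trace. digest "00" is the
# null-digest index; sha256/384/512 map to 01/02/03 in the same field.
len=32                                       # hex characters => len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)

# Reconstructed formatter: append CRC32 of the ASCII hex key, base64 both.
python3 - "$key" > "$file" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF

chmod 0600 "$file"
echo "$file"                                 # the path is what keys[i] stores
```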
00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3bc2d0de623f1486abd65edbd61c8162d14594a727c5dca7 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.63E 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.63E 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.63E 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b95a54c85982a35a438baa0cd333b6224c4b411b4982431c 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.MEQ 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b95a54c85982a35a438baa0cd333b6224c4b411b4982431c 2 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b95a54c85982a35a438baa0cd333b6224c4b411b4982431c 2 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b95a54c85982a35a438baa0cd333b6224c4b411b4982431c 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:37:07.818 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.MEQ 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.MEQ 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MEQ 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:08.080 14:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fb05f1270d1c73878019184ea2fad77f 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.FvX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fb05f1270d1c73878019184ea2fad77f 1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fb05f1270d1c73878019184ea2fad77f 1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fb05f1270d1c73878019184ea2fad77f 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.FvX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.FvX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FvX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b45e7465efeb3de176b484411e400d40 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.sHd 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b45e7465efeb3de176b484411e400d40 1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b45e7465efeb3de176b484411e400d40 1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=b45e7465efeb3de176b484411e400d40 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.sHd 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.sHd 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.sHd 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2c3c9f589a394c0814488f991d64d9ede58ff961ad45caed 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.NiY 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2c3c9f589a394c0814488f991d64d9ede58ff961ad45caed 2 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2c3c9f589a394c0814488f991d64d9ede58ff961ad45caed 2 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2c3c9f589a394c0814488f991d64d9ede58ff961ad45caed 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.NiY 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.NiY 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.NiY 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:37:08.080 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:37:08.080 14:33:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b7749208706ee4ad583e527c542a00f8 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.5zn 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b7749208706ee4ad583e527c542a00f8 0 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b7749208706ee4ad583e527c542a00f8 0 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b7749208706ee4ad583e527c542a00f8 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.5zn 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.5zn 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5zn 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b4b6faa09fa67e17df15b9748c8c42fe9b81887b6ceec42ce35ea8acb2f34ba5 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.7WW 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b4b6faa09fa67e17df15b9748c8c42fe9b81887b6ceec42ce35ea8acb2f34ba5 3 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b4b6faa09fa67e17df15b9748c8c42fe9b81887b6ceec42ce35ea8acb2f34ba5 3 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:37:08.342 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b4b6faa09fa67e17df15b9748c8c42fe9b81887b6ceec42ce35ea8acb2f34ba5 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.7WW 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.7WW 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7WW 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1954702 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1954702 ']' 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:08.343 14:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2Aw 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4rP ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4rP 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.63E 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MEQ ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.MEQ 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FvX 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.sHd ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sHd 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.NiY 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.604 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5zn ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5zn 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7WW 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:08.605 14:33:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]]
00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:37:08.605 14:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:37:12.825 Waiting for block devices as requested
00:37:12.825 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:37:12.825 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:37:13.087 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:37:13.087 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:37:13.087 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:37:13.348 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:37:13.348 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:37:13.348 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:37:13.348 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:37:13.608 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:37:14.553 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:37:14.554 14:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:37:14.554 No valid GPT data, bailing
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:37:14.554 
00:37:14.554 Discovery Log Number of Records 2, Generation counter 2
00:37:14.554 =====Discovery Log Entry 0======
00:37:14.554 trtype: tcp
00:37:14.554 adrfam: ipv4
00:37:14.554 subtype: current discovery subsystem
00:37:14.554 treq: not specified, sq flow control disable supported
00:37:14.554 portid: 1
00:37:14.554 trsvcid: 4420
00:37:14.554 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:37:14.554 traddr: 10.0.0.1
00:37:14.554 eflags: none
00:37:14.554 sectype: none
00:37:14.554 =====Discovery Log Entry 1======
00:37:14.554 trtype: tcp
00:37:14.554 adrfam: ipv4
00:37:14.554 subtype: nvme subsystem
00:37:14.554 treq: not specified, sq flow control disable supported
00:37:14.554 portid: 1
00:37:14.554 trsvcid: 4420
00:37:14.554 subnqn: nqn.2024-02.io.spdk:cnode0
00:37:14.554 traddr: 10.0.0.1
00:37:14.554 eflags: none
00:37:14.554 sectype: none
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.554 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.815 nvme0n1 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
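
The trace to this point has built the entire kernel-target side of the test: the nvmet module is loaded, subsystem nqn.2024-02.io.spdk:cnode0 exposes namespace 1 backed by /dev/nvme0n1 on a TCP port at 10.0.0.1:4420, discovery reports both the discovery subsystem and cnode0, and host nqn.2024-02.io.spdk:host0 has been admitted with DH-HMAC-CHAP material for sha256/ffdhe2048. Condensed into a standalone form, the target-side sequence amounts to roughly the sketch below. The configfs attribute names are the standard kernel nvmet ones and are filled in here as assumptions, since the xtrace records only the values being echoed, not the destination files, and the DHHC-1 strings are placeholders for the keys written to /tmp/spdk.key-* earlier in the run.

    # Sketch (assumed attribute names): kernel NVMe-oF/TCP target with DH-HMAC-CHAP
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"                 # only explicitly allowed hosts may connect
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path" # back namespace 1 with the local NVMe disk
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"                    # listen on 10.0.0.1:4420 over TCP/IPv4
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                    # expose the subsystem on the port
    echo 'hmac(sha256)' > "$host/dhchap_hash"              # per-host CHAP parameters
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:<key1>' > "$host/dhchap_key"           # placeholder for the generated key material
    echo 'DHHC-1:02:<ckey1>' > "$host/dhchap_ctrl_key"     # placeholder controller (bidirectional) key
    ln -s "$host" "$subsys/allowed_hosts/"                 # admit nqn.2024-02.io.spdk:host0

The initiator half is the pair of RPCs recorded just above: bdev_nvme_set_options advertises the digests and DH groups the host will accept, and bdev_nvme_attach_controller connects with --dhchap-key key1 and, because ckey1 is present, --dhchap-ctrlr-key ckey1 for bidirectional authentication.
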
00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:14.815 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.816 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.076 nvme0n1 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.076 14:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:15.076 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.077 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.338 nvme0n1 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.338 14:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.338 nvme0n1 00:37:15.338 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.338 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.338 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.338 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:15.338 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.338 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.599 nvme0n1 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.599 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.860 nvme0n1 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.860 14:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:15.860 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.861 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.122 nvme0n1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:16.122 
14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.122 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.383 nvme0n1 00:37:16.383 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.383 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.383 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.383 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.383 14:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:37:16.383 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.384 14:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.384 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.644 nvme0n1 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:16.644 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.645 14:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.645 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.905 nvme0n1 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:16.905 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:16.906 14:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.906 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.166 nvme0n1 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.166 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:17.167 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:17.167 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:17.167 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:17.167 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.167 14:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.427 nvme0n1 00:37:17.427 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.427 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.427 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.427 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.427 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.427 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:17.688 14:33:21 
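On the initiator side, each connect_authenticate pass boils down to two RPCs: first constrain the bdev_nvme module to the digest/dhgroup pair under test, then attach with the key names for that keyid. The same calls as plain rpc.py invocations (script path assumed; the key names key0/ckey0 refer to keyring entries registered before this excerpt):

# Sketch of one iteration's RPCs, mirroring the rpc_cmd lines in the trace.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0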
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.688 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.950 nvme0n1 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.950 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.211 nvme0n1 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
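The get_main_ns_ip block that repeats before every attach is transport-keyed address selection: map the transport to the name of an environment variable, then dereference it. A condensed sketch (the variable name TEST_TRANSPORT is an assumption; the trace only shows its value, tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # rdma runs dial the target-side IP
        [tcp]=NVMF_INITIATOR_IP       # tcp runs dial the initiator IP
    )
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] && echo "${!ip}"  # indirect expansion; prints 10.0.0.1 in this run
}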
dhgroup=ffdhe4096 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.211 14:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.471 nvme0n1 00:37:18.471 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.471 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.472 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.472 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.472 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.472 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.732 14:33:22 
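After each attach, the host/auth.sh@64/@65 lines verify that exactly the expected controller came up and then tear it down, so an authentication failure surfaces as a missing or misnamed controller rather than a silent pass. The same check, spelled out:

# Post-connect verification and teardown, as traced at host/auth.sh@64-65.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1               # auth failed: controller absent or misnamed
./scripts/rpc.py bdev_nvme_detach_controller nvme0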
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.732 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.733 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.993 nvme0n1 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.993 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.994 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.565 nvme0n1 00:37:19.565 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.565 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.565 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.565 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.565 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.565 14:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 
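The host/auth.sh@101 and @102 markers reveal the drivers: an outer loop over DH groups and an inner loop over key IDs 0-4. Key ID 4 has no controller key, so its ckey expands to nothing and that attach runs unidirectional. A sketch of the structure (key strings truncated to placeholders; the real ones appear verbatim in the trace):

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups exercised in this excerpt
keys=("DHHC-1:00:...:" "DHHC-1:00:...:" "DHHC-1:01:...:" "DHHC-1:02:...:" "DHHC-1:03:...:")
ckeys=("DHHC-1:03:...:" "DHHC-1:02:...:" "DHHC-1:01:...:" "DHHC-1:00:...:" "")
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        # inside connect_authenticate the optional flag pair is built as
        #   ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # so an empty ckeys[4] simply drops --dhchap-ctrlr-key from the attach.
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done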
00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.565 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.826 nvme0n1 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:19.826 14:33:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.826 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.086 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.347 nvme0n1 00:37:20.347 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.347 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.347 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.347 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.347 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.347 14:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
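The constant autotest_common.sh@561 xtrace_disable / @589 [[ 0 == 0 ]] pairs are not part of the auth logic; they are the rpc_cmd wrapper muting xtrace around the RPC call and then asserting its exit status. Roughly, and simplified (the real wrapper also supports a persistent daemon socket; $rootdir is assumed):

rpc_cmd() {
    local rc
    xtrace_disable                      # the @561 lines in the trace
    "$rootdir/scripts/rpc.py" "$@"
    rc=$?
    xtrace_restore
    [[ $rc == 0 ]]                      # the @589 "[[ 0 == 0 ]]" lines
}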
-- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:20.347 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.607 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.867 nvme0n1 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:20.867 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.868 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.439 nvme0n1 00:37:21.439 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.439 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:21.439 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:21.439 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.439 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.439 14:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.439 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:22.011 nvme0n1 00:37:22.011 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.011 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.011 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.011 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.011 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.011 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.271 14:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.842 nvme0n1 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:37:22.842 
14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.842 14:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.507 nvme0n1 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:23.507 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:23.802 
14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.802 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.386 nvme0n1 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.386 14:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.956 nvme0n1 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.956 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.218 nvme0n1 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.218 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.219 14:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.480 nvme0n1 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:25.480 14:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.480 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.481 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.742 nvme0n1 00:37:25.742 14:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:25.742 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.743 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.004 nvme0n1 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.004 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.266 nvme0n1 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:26.266 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.267 14:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.527 nvme0n1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.527 
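
The host/auth.sh@101-@104 markers that repeat through this trace come from the driver loop: every DH group is exercised against every key index, first programming the target, then authenticating from the host. Reconstructed from those markers, the loop looks roughly like this (a sketch: the dhgroups/keys array names and the two helper names appear verbatim in the trace, the surrounding digest loop is inferred; sha384 is the digest at this point in the run):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do         # key indices 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # stage secrets on the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach/verify/detach from the host
        done
    done
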
14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:26.527 14:33:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.527 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.788 nvme0n1 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.788 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.049 nvme0n1 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.049 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.050 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.311 nvme0n1 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:27.311 
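
Each nvmet_auth_set_key call (host/auth.sh@42-@51) stages the digest, DH group, and DHHC-1 secrets on the target side before the host attempts to connect. xtrace does not record where the echo output is redirected, so the nvmet configfs path and attribute names below are assumptions; only the echoed values come verbatim from the trace:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        # assumed configfs location for the allowed host; not visible in xtrace
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"      # 'hmac(sha384)' in this run
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        # a controller (bidirectional) key is only set when one exists for this index
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
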
14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.311 14:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.573 nvme0n1 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.573 
14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:27.573 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.574 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.835 nvme0n1 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:27.835 14:33:31 
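
The host half of every iteration is connect_authenticate (host/auth.sh@55-@65): restrict bdev_nvme to exactly one digest/DH-group pair, attach with the key under test, confirm that the controller actually came up, then detach. Reassembled from the traced commands (a sketch; any error handling in the real function is not visible here):

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest=$1 dhgroup=$2 keyid=$3
        # pass a controller key only when a ckeyN was defined for this key index
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # DH-HMAC-CHAP succeeded iff the controller registered under the requested name
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
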
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.835 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.095 nvme0n1 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.095 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.356 14:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.617 nvme0n1 00:37:28.617 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.618 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.880 nvme0n1 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:28.880 14:33:32 
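
The -a 10.0.0.1 in every attach comes from get_main_ns_ip (nvmf/common.sh@767-@781), which maps the transport to the name of the environment variable holding the right address and then dereferences it. A sketch matching the traced checks; the TEST_TRANSPORT variable name and the return-on-empty behavior are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # target-side address for RDMA runs
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # initiator address for TCP runs
        [[ -z $TEST_TRANSPORT ]] && return 1          # expands to 'tcp' in this trace
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # name of the variable to dereference
        [[ -z ${!ip} ]] && return 1                   # '[[ -z 10.0.0.1 ]]' in the trace
        echo "${!ip}"
    }
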
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.880 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.142 nvme0n1 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.142 14:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.713 nvme0n1 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.713 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:29.714 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.285 nvme0n1 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.285 14:33:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.285 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.286 14:33:33 
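The nvmet_auth_set_key passes traced above (host/auth.sh@42-51) prime the kernel nvmet target before each authenticated connect: @48 echoes 'hmac(<digest>)', @49 the DH group, @50 the host's DHHC-1 key, and @51 adds the controller key only when one exists. A minimal sketch of such a helper, under stated assumptions: the configfs destinations are not shown in this trace (only the echoed values are), and the key material is inferred to come from the keys/ckeys arrays that the @102 loop iterates.

  # Sketch of a target-side helper mirroring the traced nvmet_auth_set_key
  # steps. ASSUMPTION: the echoes land in the standard Linux nvmet configfs
  # host attributes; the trace never shows the actual redirection targets.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}   # arrays from the @102 loop
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

      echo "hmac(${digest})" > "${host}/dhchap_hash"     # host/auth.sh@48
      echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # host/auth.sh@49
      echo "${key}"          > "${host}/dhchap_key"      # host/auth.sh@50
      # Controller key is optional; key id 4 in this run has none.
      [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # host/auth.sh@51
  }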
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.286 14:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.547 nvme0n1 00:37:30.547 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.807 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:30.807 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:30.807 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:30.808 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.808 
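Each connect_authenticate pass (host/auth.sh@55-65) then drives the initiator over SPDK's JSON-RPC (rpc_cmd in these logs appears to wrap scripts/rpc.py): the allowed digests and DH groups are pinned to the combination under test, the controller is attached with the matching key pair, and success is confirmed by controller name before teardown. The cycle, assembled from the RPCs exactly as they appear in this trace (values from the sha384 + ffdhe6144, key id 0 pass above):

  # Pin negotiation to one digest/dhgroup combination (host/auth.sh@60).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Attach over TCP with DH-HMAC-CHAP; the attach only succeeds if
  # authentication completes (host/auth.sh@61).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller came up under the expected name, then clean up
  # (host/auth.sh@64-65).
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The interleaved nvme0n1 lines are the namespace bdev reported as each attach succeeds.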
14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.069 nvme0n1 00:37:31.069 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.069 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.069 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.069 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.069 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.069 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.330 14:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.591 nvme0n1 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.591 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:31.852 14:33:35 
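The recurring get_main_ns_ip blocks (nvmf/common.sh@767-781) pick the address to dial by transport: an associative array maps each transport to the name of an environment variable, which the helper then dereferences, here resolving NVMF_INITIATOR_IP to 10.0.0.1 for TCP. A sketch of that selection logic; the ${!ip} indirection and the TEST_TRANSPORT variable are inferences from the trace (which tests the literal value tcp, then the variable name, then its value), not shown verbatim:

  # Sketch of the traced IP selection (nvmf/common.sh@767-781).
  # ASSUMPTION: the transport under test arrives in TEST_TRANSPORT, and the
  # name-then-value [[ -z ]] checks in the trace imply ${!ip} indirection.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @770
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @771

      [[ -z ${TEST_TRANSPORT} ]] && return 1                  # @773, 'tcp' here
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @773
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # @774
      [[ -z ${!ip} ]] && return 1                             # @776, 10.0.0.1 here
      echo "${!ip}"                                           # @781
  }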
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.852 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.424 nvme0n1 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:32.424 14:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.424 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.995 nvme0n1 00:37:32.995 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.995 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:32.995 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:32.995 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.995 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:32.995 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.256 
14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.256 14:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.828 nvme0n1 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.828 14:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.779 nvme0n1 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.779 14:33:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:34.779 14:33:38 
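Key id 4 in this matrix has no controller key: its ckey= assignment above is empty, @51 skips the controller-key echo ([[ -z '' ]]), and the corresponding attach goes out with --dhchap-key key4 but no --dhchap-ctrlr-key. The host/auth.sh@58 line handles that in one stroke: ${var:+word} expands to the option words only when the controller key is set and non-empty, and assigning the unquoted result to an array splits it into separate arguments. A standalone demonstration of the idiom (values illustrative only):

  # ${ckeys[keyid]:+...} yields the bracketed words only when ckeys[keyid]
  # is set and non-empty, so the option pair simply vanishes for key id 4.
  ckeys=([1]="DHHC-1:02:example" [4]="")

  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=${keyid}: ${#ckey[@]} extra args:" "${ckey[@]}"
  done
  # keyid=1: 2 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4: 0 extra args: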
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.779 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.351 nvme0n1 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.351 14:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:35.613 nvme0n1 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:35.613 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.614 nvme0n1 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.614 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:37:35.876 
14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.876 nvme0n1 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.876 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.138 
14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.138 nvme0n1 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:36.138 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.139 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.400 nvme0n1 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.400 14:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.400 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.401 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.662 nvme0n1 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.662 
14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:36.662 14:33:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:36.662 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.663 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.923 nvme0n1 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:36.923 14:33:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:36.923 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.924 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.185 nvme0n1 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.185 14:33:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:37.185 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:37.186 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:37.186 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.186 14:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.447 nvme0n1 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:37.447 
14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.447 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:37:37.708 nvme0n1 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:37.708 14:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:37.708 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:37.709 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.709 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.970 nvme0n1 00:37:37.970 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.970 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:37.970 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:37.970 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.970 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:37.970 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.230 14:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.230 14:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.230 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.492 nvme0n1 00:37:38.492 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.492 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.492 14:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:38.492 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.493 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.754 nvme0n1 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.754 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.014 nvme0n1 00:37:39.014 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.014 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.014 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:39.014 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.014 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.014 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.275 14:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.537 nvme0n1 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.537 14:33:43 
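
The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) program the kernel target side for each digest/dhgroup/keyid combination before the host reattaches; here it has just loaded the sha512/ffdhe6144 pair for keyid 0. A minimal sketch of what those echo steps appear to amount to, assuming the helper writes the standard nvmet configfs attributes for the allowed host (the configfs path and attribute names are assumptions, not taken from this log; the secrets are the ones echoed at @45/@46):

# Sketch: push one iteration's DH-HMAC-CHAP parameters into nvmet configfs.
# hostnqn matches the -q value passed to bdev_nvme_attach_controller below.
hostnqn=nqn.2024-02.io.spdk:host0
nvmet_host=/sys/kernel/config/nvmet/hosts/$hostnqn            # assumed location
key='DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/:'
ckey='DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=:'
echo 'hmac(sha512)' > "$nvmet_host/dhchap_hash"               # digest (host/auth.sh@48)
echo ffdhe6144      > "$nvmet_host/dhchap_dhgroup"            # DH group (host/auth.sh@49)
echo "$key"         > "$nvmet_host/dhchap_key"                # host secret (host/auth.sh@50)
[[ -n $ckey ]] && echo "$ckey" > "$nvmet_host/dhchap_ctrl_key" # controller secret for bidirectional auth (@51)
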
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.537 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.109 nvme0n1 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:40.109 14:33:43 
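
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 above is what makes the controller key optional: when ckeys[keyid] is empty, the array stays empty and the attach gets no --dhchap-ctrlr-key at all (visible for keyid 4, whose ckey echoes as empty and whose attach carries only --dhchap-key key4). A standalone sketch of the idiom, with hypothetical secret values:

# bash ${var:+word} expands to word only when var is set and non-empty;
# inside an array assignment that yields either two words or none.
ckeys=([1]="DHHC-1:02:example==:" [4]="")          # hypothetical values
keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey1
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # prints: 0 -- the flag is omitted entirely
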
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.109 14:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.370 nvme0n1 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.370 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.631 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.892 nvme0n1 00:37:40.892 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.892 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:40.892 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:40.892 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.892 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:40.893 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.154 14:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.414 nvme0n1 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:37:41.414 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:41.415 14:33:45 
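
The get_main_ns_ip trace that repeats before every attach (nvmf/common.sh@767-781) resolves which address to dial: it maps the transport to the name of an environment variable, then dereferences that name with bash indirect expansion. A sketch of the logic visible in the trace; the transport selector variable is an assumption, the variable names and address are the ones echoed in this run:

# Map transport -> env var NAME, then indirect-expand to get its value.
declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
NVMF_INITIATOR_IP=10.0.0.1
transport=tcp                        # assumed selector; @773 checks it is non-empty
ip=${ip_candidates[$transport]}      # -> NVMF_INITIATOR_IP, the name (@774)
[[ -n ${!ip} ]] && echo "${!ip}"     # @776/@781: indirect expansion prints 10.0.0.1
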
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.415 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.986 nvme0n1 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgyZmQ3OTM2NjM5YTUwMDkyYWVlZWI1N2NlOTA1MzPTX1F/: 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDVkNTNlOGQ5YmY5ZGIwZWUzNTZmYWJjNTkzNzg3ZjBjYWIyNTZmZjVmNTBkZmM4ZTI2YTM2NTFlODEyZTAwZO9h5Fs=: 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:41.986 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:41.987 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:41.987 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.987 14:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.558 nvme0n1 00:37:42.558 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.558 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:42.558 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:42.558 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.558 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.558 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:42.819 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.820 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.392 nvme0n1 00:37:43.392 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.392 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:43.392 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:43.392 14:33:46 
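
Every secret in this run uses the DHHC-1 textual representation, DHHC-1:<t>:<base64>:. Read against the NVMe DH-HMAC-CHAP spec (a reading, not something the log states), the middle field marks the optional secret transformation (00 none, 01/02/03 for SHA-256/384/512), and the base64 payload is the raw secret followed by a 4-byte CRC-32. The key lengths in this trace bear that out:

# 48 base64 chars -> 36 bytes = 32-byte secret + 4-byte CRC (the :00:/:01: keys);
# 72 chars -> 52 = 48 + 4 (the :02: keys); 92 chars -> 68 = 64 + 4 (the :03: keys).
# Decoding one of the :01: payloads used above checks the arithmetic:
echo 'ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ' | base64 -d | wc -c   # 36
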
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.392 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.392 14:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.392 14:33:47 
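
Each attach above is followed by the same verify-and-teardown step: the bare nvme0n1 lines are apparently the attach RPC printing the bdev it created, after which the test lists controllers, compares the name, and detaches. Condensed from the host/auth.sh@64-65 records in this trace:

# Verify the authenticated connect actually produced a controller, then tear down.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                    # the xtrace renders this as nvme0 == \n\v\m\e\0
rpc_cmd bdev_nvme_detach_controller nvme0
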
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.392 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.338 nvme0n1 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmMzYzlmNTg5YTM5NGMwODE0NDg4Zjk5MWQ2NGQ5ZWRlNThmZjk2MWFkNDVjYWVkIu2OSg==: 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: ]] 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yjc3NDkyMDg3MDZlZTRhZDU4M2U1MjdjNTQyYTAwZjicjn9B: 00:37:44.338 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:44.339 14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.339 
14:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.908 nvme0n1 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiNmZhYTA5ZmE2N2UxN2RmMTViOTc0OGM4YzQyZmU5YjgxODg3YjZjZWVjNDJjZTM1ZWE4YWNiMmYzNGJhNcIC4Ho=: 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.908 14:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.480 nvme0n1 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.480 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
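
The rounds traced above repeat one authentication cycle per key id: program the key into the kernel nvmet target, configure the matching digest and DH group on the SPDK host, attach with that key, confirm the controller appears, and detach. Condensed from the trace into a sketch (nvmet_auth_set_key and rpc_cmd are the helpers being traced, and keys/ckeys are the DHHC-1 arrays set up earlier in the script; the loop body is a reconstruction, not the verbatim script):

# One DH-CHAP round per key id, as traced above (sketch).
for keyid in "${!keys[@]}"; do
    nvmet_auth_set_key sha512 ffdhe8192 "$keyid"            # kernel target side
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe8192                         # SPDK host side
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}    # ckey4 is empty, so the flag is dropped
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
done
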
-- # keyid=1 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.742 request: 00:37:45.742 { 00:37:45.742 "name": "nvme0", 00:37:45.742 "trtype": "tcp", 00:37:45.742 "traddr": "10.0.0.1", 00:37:45.742 "adrfam": "ipv4", 00:37:45.742 "trsvcid": "4420", 00:37:45.742 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:45.742 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:45.742 "prchk_reftag": false, 00:37:45.742 "prchk_guard": false, 00:37:45.742 "hdgst": false, 00:37:45.742 "ddgst": false, 00:37:45.742 "allow_unrecognized_csi": false, 00:37:45.742 "method": "bdev_nvme_attach_controller", 00:37:45.742 "req_id": 1 00:37:45.742 } 00:37:45.742 Got JSON-RPC error response 00:37:45.742 response: 00:37:45.742 { 00:37:45.742 "code": -5, 00:37:45.742 "message": "Input/output error" 00:37:45.742 } 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
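
The request/response pair above is the first deliberate failure: attaching to the DH-CHAP-protected subsystem without any key is refused, surfacing as JSON-RPC code -5 (EIO, "Input/output error"). The NOT wrapper being traced inverts the exit status so an expected failure counts as a pass; the real helper in autotest_common.sh also validates its argument with type -t, as the trace shows, but the core pattern reduces to this sketch:

# Minimal sketch of the expected-failure wrapper traced above.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # succeed only when the wrapped command failed
}
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
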
00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.742 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.742 request: 00:37:45.742 { 00:37:45.743 "name": "nvme0", 00:37:45.743 "trtype": "tcp", 00:37:45.743 "traddr": "10.0.0.1", 00:37:45.743 "adrfam": "ipv4", 00:37:45.743 "trsvcid": "4420", 00:37:45.743 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:45.743 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:45.743 "prchk_reftag": false, 00:37:45.743 "prchk_guard": false, 00:37:45.743 "hdgst": false, 00:37:45.743 "ddgst": false, 00:37:45.743 "dhchap_key": "key2", 00:37:45.743 "allow_unrecognized_csi": false, 00:37:45.743 "method": "bdev_nvme_attach_controller", 00:37:45.743 "req_id": 1 00:37:45.743 } 00:37:45.743 Got JSON-RPC error response 00:37:45.743 response: 00:37:45.743 { 00:37:45.743 "code": -5, 00:37:45.743 "message": "Input/output error" 00:37:45.743 } 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.743 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
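
get_main_ns_ip, traced before every attach, resolves the address the initiator should dial: it maps the transport to the name of the variable holding the right IP (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and dereferences it, which is why the trace shows the variable name first and then 10.0.0.1. A reconstruction as a sketch (the transport variable name is an assumption):

# Sketch of get_main_ns_ip as traced above (indirect variable lookup).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"      # 10.0.0.1 in this run
}
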
00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.003 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.003 request: 00:37:46.003 { 00:37:46.003 "name": "nvme0", 00:37:46.003 "trtype": "tcp", 00:37:46.003 "traddr": "10.0.0.1", 00:37:46.003 "adrfam": "ipv4", 00:37:46.003 "trsvcid": "4420", 00:37:46.003 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:46.003 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:46.003 "prchk_reftag": false, 00:37:46.003 "prchk_guard": false, 00:37:46.003 "hdgst": false, 00:37:46.003 "ddgst": false, 00:37:46.003 "dhchap_key": "key1", 00:37:46.003 "dhchap_ctrlr_key": "ckey2", 00:37:46.003 "allow_unrecognized_csi": false, 00:37:46.004 "method": "bdev_nvme_attach_controller", 00:37:46.004 "req_id": 1 00:37:46.004 } 00:37:46.004 Got JSON-RPC error response 00:37:46.004 response: 00:37:46.004 { 00:37:46.004 "code": -5, 00:37:46.004 "message": "Input/output 
error" 00:37:46.004 } 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.004 nvme0n1 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.004 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.264 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.264 request: 00:37:46.264 { 00:37:46.264 "name": "nvme0", 00:37:46.264 "dhchap_key": "key1", 00:37:46.264 "dhchap_ctrlr_key": "ckey2", 00:37:46.264 "method": "bdev_nvme_set_keys", 00:37:46.264 "req_id": 1 00:37:46.264 } 00:37:46.264 Got JSON-RPC error response 00:37:46.264 response: 00:37:46.265 { 00:37:46.265 "code": -13, 00:37:46.265 "message": "Permission denied" 00:37:46.265 } 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
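
bdev_nvme_set_keys, traced just above, rotates DH-CHAP keys on a live controller: re-keying with the pair the target was just given (key2/ckey2) succeeds, while pairing the stale key1 with ckey2 is rejected with JSON-RPC code -13 (EACCES, "Permission denied"). The same two calls issued through SPDK's rpc.py would look roughly like this (the rpc.py path is inferred from the workspace layout; rpc_cmd in the trace forwards its arguments to it):

# Hedged sketch: the traced rekeying calls issued via rpc.py directly.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2   # matching pair: ok
"$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
    && echo "unexpected success" || echo "rejected as expected (-13)"
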
!es == 0 )) 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:46.265 14:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:47.649 14:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:48.592 14:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:48.592 14:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:48.592 14:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.592 14:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.592 14:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2JjMmQwZGU2MjNmMTQ4NmFiZDY1ZWRiZDYxYzgxNjJkMTQ1OTRhNzI3YzVkY2E3aQPE3A==: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
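
The repeated get_controllers/jq/sleep records above are a one-second poll: after the rejected rekey, the controller (created earlier with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1) is expected to drop, and the test waits for the controller count to reach zero before continuing. The traced loop reduces to:

# Poll until the controller list is empty (sketch of the traced loop).
while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1s
done
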
DHHC-1:02:Yjk1YTU0Yzg1OTgyYTM1YTQzOGJhYTBjZDMzM2I2MjI0YzRiNDExYjQ5ODI0MzFjSbANag==: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.592 nvme0n1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmIwNWYxMjcwZDFjNzM4NzgwMTkxODRlYTJmYWQ3N2b9sFIJ: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: ]] 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQ1ZTc0NjVlZmViM2RlMTc2YjQ4NDQxMWU0MDBkNDADbLxa: 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.592 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.592 request: 00:37:48.592 { 00:37:48.592 "name": "nvme0", 00:37:48.592 "dhchap_key": "key2", 00:37:48.592 "dhchap_ctrlr_key": "ckey1", 00:37:48.592 "method": "bdev_nvme_set_keys", 00:37:48.592 "req_id": 1 00:37:48.592 } 00:37:48.593 Got JSON-RPC error response 00:37:48.593 response: 00:37:48.593 { 00:37:48.593 "code": -13, 00:37:48.593 "message": "Permission denied" 00:37:48.593 } 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.593 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:48.855 14:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:49.798 14:33:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:49.798 rmmod nvme_tcp 00:37:49.798 rmmod nvme_fabrics 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:49.798 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1954702 ']' 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1954702 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1954702 ']' 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1954702 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1954702 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1954702' 00:37:49.799 killing process with pid 1954702 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1954702 00:37:49.799 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1954702 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
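
The teardown above (nvmftestfini) unwinds in reverse order of setup: sync, unload nvme-tcp and nvme-fabrics on the initiator side, then kill the SPDK target process (pid 1954702, the reactor_0 seen in the ps output). killprocess, reconstructed from the trace as a sketch (the sudo special-case visible at autotest_common.sh line 960 is simplified away):

# Hedged reconstruction of the killprocess helper traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0     # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($process_name)"
    fi
    kill "$pid"
    wait "$pid" || true
}
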
'_remove_spdk_ns 15> /dev/null' 00:37:50.059 14:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:51.972 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:52.232 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:52.232 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:52.232 14:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:56.437 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:56.437 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:56.437 14:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2Aw /tmp/spdk.key-null.63E /tmp/spdk.key-sha256.FvX /tmp/spdk.key-sha384.NiY /tmp/spdk.key-sha512.7WW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:56.437 14:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:59.740 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
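
clean_kernel_target, traced a few records back, dismantles the kernel nvmet target through configfs in strict dependency order before the PCI rebinding output that follows. Lifted from the trace into a standalone sketch (xtrace hides redirections, so the target of the bare "echo 0" is assumed to be the namespace's enable attribute):

# configfs teardown order for the kernel nvmet target (sketch from the trace).
nqn=nqn.2024-02.io.spdk:cnode0
rm /sys/kernel/config/nvmet/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # redirect target assumed
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet
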
00:37:59.740 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:59.740 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:59.740 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:00.313 00:38:00.313 real 1m1.378s 00:38:00.313 user 0m54.856s 00:38:00.313 sys 0m16.419s 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.313 ************************************ 00:38:00.313 END TEST nvmf_auth_host 00:38:00.313 ************************************ 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.313 ************************************ 00:38:00.313 START TEST nvmf_digest 00:38:00.313 ************************************ 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:00.313 * Looking for test storage... 
00:38:00.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:38:00.313 14:34:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.575 --rc genhtml_branch_coverage=1 00:38:00.575 --rc genhtml_function_coverage=1 00:38:00.575 --rc genhtml_legend=1 00:38:00.575 --rc geninfo_all_blocks=1 00:38:00.575 --rc geninfo_unexecuted_blocks=1 00:38:00.575 00:38:00.575 ' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.575 --rc genhtml_branch_coverage=1 00:38:00.575 --rc genhtml_function_coverage=1 00:38:00.575 --rc genhtml_legend=1 00:38:00.575 --rc geninfo_all_blocks=1 00:38:00.575 --rc geninfo_unexecuted_blocks=1 00:38:00.575 00:38:00.575 ' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.575 --rc genhtml_branch_coverage=1 00:38:00.575 --rc genhtml_function_coverage=1 00:38:00.575 --rc genhtml_legend=1 00:38:00.575 --rc geninfo_all_blocks=1 00:38:00.575 --rc geninfo_unexecuted_blocks=1 00:38:00.575 00:38:00.575 ' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:00.575 --rc genhtml_branch_coverage=1 00:38:00.575 --rc genhtml_function_coverage=1 00:38:00.575 --rc genhtml_legend=1 00:38:00.575 --rc geninfo_all_blocks=1 00:38:00.575 --rc geninfo_unexecuted_blocks=1 00:38:00.575 00:38:00.575 ' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.575 
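
The scripts/common.sh records above trace a shell semantic-version comparison: "lt 1.15 2" splits both versions on ".", "-", and ":", validates each field with decimal(), and compares position by position, so lcov 1.15 is correctly judged older than 2. Condensed into a sketch (the traced helper also supports the other operators via its case statement):

# Field-by-field version compare, as traced above (sketch).
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not "less than"
}
lt 1.15 2 && echo "lcov predates 2.x"
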
14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:00.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:00.575 14:34:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.575 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:00.576 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:00.576 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:38:00.576 14:34:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.717 
14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:08.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:08.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:08.717 Found net devices under 0000:31:00.0: cvl_0_0 
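The echoed "Found net devices" line above comes from gather_supported_nvmf_pci_devs resolving a whitelisted PCI function to its kernel net device through sysfs (the second port, 0000:31:00.1, follows the same path just below). A minimal sketch of that lookup, assuming one PCI address taken from this log:

    # Mirror of pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh@409:
    # every net device bound to the PCI function appears as a directory entry.
    pci=0000:31:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue              # the glob may match nothing
        echo "Found net devices under $pci: ${dev##*/}"
    done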
00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:08.717 Found net devices under 0000:31:00.1: cvl_0_1 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:08.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:38:08.717 00:38:08.717 --- 10.0.0.2 ping statistics --- 00:38:08.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.717 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:38:08.717 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:08.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:38:08.718 00:38:08.718 --- 10.0.0.1 ping statistics --- 00:38:08.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.718 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:08.718 ************************************ 00:38:08.718 START TEST nvmf_digest_clean 00:38:08.718 ************************************ 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1971878 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1971878 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1971878 ']' 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:08.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:08.718 14:34:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:08.718 [2024-10-13 14:34:11.775241] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:08.718 [2024-10-13 14:34:11.775286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:08.718 [2024-10-13 14:34:11.911913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:08.718 [2024-10-13 14:34:11.960827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.718 [2024-10-13 14:34:11.978237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:08.718 [2024-10-13 14:34:11.978269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:08.718 [2024-10-13 14:34:11.978277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:08.718 [2024-10-13 14:34:11.978284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
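The namespace plumbing that nvmfappstart relies on was traced above in nvmf/common.sh@250-291: the first E810 port moves into a private namespace as the target side, both ends get 10.0.0.x addresses, the NVMe/TCP port is opened, and reachability is checked with ping in both directions. Condensed into plain commands, with only the workspace path shortened:

    # Target-side network setup from nvmf/common.sh@271-291, condensed.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target namespace
    # The target itself then runs inside the same namespace (nvmf/common.sh@506):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &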
00:38:08.718 [2024-10-13 14:34:11.978289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:08.718 [2024-10-13 14:34:11.978880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.979 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:09.240 null0 00:38:09.240 [2024-10-13 14:34:12.702307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.240 [2024-10-13 14:34:12.726526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1971944 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1971944 /var/tmp/bperf.sock 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1971944 ']' 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:09.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:09.240 14:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:09.240 [2024-10-13 14:34:12.785560] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:09.240 [2024-10-13 14:34:12.785623] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971944 ] 00:38:09.240 [2024-10-13 14:34:12.920558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:09.501 [2024-10-13 14:34:12.969003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.501 [2024-10-13 14:34:12.997294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.072 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:10.072 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:10.072 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:10.072 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:10.072 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:10.333 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:10.333 14:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:10.625 nvme0n1 00:38:10.625 14:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:10.625 14:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:10.625 Running I/O for 2 seconds... 
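Each run_bperf pass drives the same client-side sequence just traced: bdevperf starts paused on its own RPC socket, the framework is released, a controller is attached with data digest enabled (--ddgst) so every data PDU carries a CRC32C digest, and bdevperf.py then kicks off the timed workload. A condensed sketch for this first pass (randread, 4 KiB, queue depth 128), with workspace paths shortened:

    # Client side of one digest run, per host/digest.sh@82-92.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests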
00:38:12.585 19249.00 IOPS, 75.19 MiB/s [2024-10-13T12:34:16.292Z] 20836.00 IOPS, 81.39 MiB/s 00:38:12.585 Latency(us) 00:38:12.585 [2024-10-13T12:34:16.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.585 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:12.585 nvme0n1 : 2.00 20860.78 81.49 0.00 0.00 6129.06 2805.48 16093.87 00:38:12.585 [2024-10-13T12:34:16.292Z] =================================================================================================================== 00:38:12.585 [2024-10-13T12:34:16.292Z] Total : 20860.78 81.49 0.00 0.00 6129.06 2805.48 16093.87 00:38:12.585 { 00:38:12.585 "results": [ 00:38:12.585 { 00:38:12.585 "job": "nvme0n1", 00:38:12.585 "core_mask": "0x2", 00:38:12.585 "workload": "randread", 00:38:12.585 "status": "finished", 00:38:12.585 "queue_depth": 128, 00:38:12.585 "io_size": 4096, 00:38:12.585 "runtime": 2.00376, 00:38:12.585 "iops": 20860.78173034695, 00:38:12.585 "mibps": 81.48742863416777, 00:38:12.585 "io_failed": 0, 00:38:12.585 "io_timeout": 0, 00:38:12.585 "avg_latency_us": 6129.05965146746, 00:38:12.585 "min_latency_us": 2805.4794520547944, 00:38:12.585 "max_latency_us": 16093.872368860675 00:38:12.585 } 00:38:12.585 ], 00:38:12.585 "core_count": 1 00:38:12.585 } 00:38:12.585 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:12.585 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:12.585 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:12.846 | select(.opcode=="crc32c") 00:38:12.846 | "\(.module_name) \(.executed)"' 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1971944 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1971944 ']' 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1971944 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1971944 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1971944' 00:38:12.846 killing process with pid 1971944 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1971944 00:38:12.846 Received shutdown signal, test time was about 2.000000 seconds 00:38:12.846 00:38:12.846 Latency(us) 00:38:12.846 [2024-10-13T12:34:16.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.846 [2024-10-13T12:34:16.553Z] =================================================================================================================== 00:38:12.846 [2024-10-13T12:34:16.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:12.846 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1971944 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1972731 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1972731 /var/tmp/bperf.sock 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1972731 ']' 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:13.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:13.107 14:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:13.107 [2024-10-13 14:34:16.653225] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
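Before this second bperf instance was launched, host/digest.sh@93-96 read back the previous instance's accel statistics and asserted that the crc32c digests were executed by the expected module (software here, since DSA scanning is disabled). The check, shown standalone:

    # Pull "<module> <executed>" for crc32c ops out of accel_get_stats
    # (host/digest.sh@36-37); the test requires executed > 0 and module == software.
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'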
00:38:13.107 [2024-10-13 14:34:16.653279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972731 ] 00:38:13.107 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:13.107 Zero copy mechanism will not be used. 00:38:13.107 [2024-10-13 14:34:16.783512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:13.368 [2024-10-13 14:34:16.831692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.368 [2024-10-13 14:34:16.847994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.939 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:13.939 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:13.939 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:13.939 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:13.939 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:14.200 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:14.200 14:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:14.476 nvme0n1 00:38:14.477 14:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:14.477 14:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:14.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:14.477 Zero copy mechanism will not be used. 00:38:14.477 Running I/O for 2 seconds... 
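This second pass switches to 128 KiB I/O at queue depth 16, and bdevperf notes that 131072 bytes exceeds its 65536-byte zero-copy threshold, so buffers are copied on the socket path. The MiB/s column in the result tables is simply IOPS times I/O size; a quick check against the first pass's logged numbers:

    # Sanity check: 20860.78 IOPS * 4096 B / 2^20 should match the logged 81.49 MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 20860.78 * 4096 / 1048576 }'   # -> 81.49 MiB/s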
00:38:16.816 3504.00 IOPS, 438.00 MiB/s [2024-10-13T12:34:20.523Z] 3195.50 IOPS, 399.44 MiB/s 00:38:16.816 Latency(us) 00:38:16.816 [2024-10-13T12:34:20.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.816 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:16.816 nvme0n1 : 2.01 3193.84 399.23 0.00 0.00 5006.78 957.97 14670.60 00:38:16.816 [2024-10-13T12:34:20.523Z] =================================================================================================================== 00:38:16.816 [2024-10-13T12:34:20.523Z] Total : 3193.84 399.23 0.00 0.00 5006.78 957.97 14670.60 00:38:16.816 { 00:38:16.816 "results": [ 00:38:16.816 { 00:38:16.816 "job": "nvme0n1", 00:38:16.816 "core_mask": "0x2", 00:38:16.816 "workload": "randread", 00:38:16.816 "status": "finished", 00:38:16.816 "queue_depth": 16, 00:38:16.816 "io_size": 131072, 00:38:16.816 "runtime": 2.006047, 00:38:16.816 "iops": 3193.8434144364514, 00:38:16.816 "mibps": 399.23042680455643, 00:38:16.816 "io_failed": 0, 00:38:16.816 "io_timeout": 0, 00:38:16.816 "avg_latency_us": 5006.782835617013, 00:38:16.816 "min_latency_us": 957.968593384564, 00:38:16.816 "max_latency_us": 14670.604744403608 00:38:16.816 } 00:38:16.816 ], 00:38:16.816 "core_count": 1 00:38:16.816 } 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:16.816 | select(.opcode=="crc32c") 00:38:16.816 | "\(.module_name) \(.executed)"' 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1972731 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1972731 ']' 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1972731 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1972731 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1972731' 00:38:16.816 killing process with pid 1972731 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1972731 00:38:16.816 Received shutdown signal, test time was about 2.000000 seconds 00:38:16.816 00:38:16.816 Latency(us) 00:38:16.816 [2024-10-13T12:34:20.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:16.816 [2024-10-13T12:34:20.523Z] =================================================================================================================== 00:38:16.816 [2024-10-13T12:34:20.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1972731 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1973499 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1973499 /var/tmp/bperf.sock 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1973499 ']' 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:16.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:16.816 14:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:16.816 [2024-10-13 14:34:20.501077] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
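killprocess, which reaped the previous bperf instance just before this third one started, only signals the pid after confirming it still names an SPDK reactor, so a recycled pid cannot take down an unrelated process. A condensed sketch of the traced logic (the sudo branch, not taken in this run, is omitted):

    # Condensed from common/autotest_common.sh@950-974 as traced above.
    pid=1972731
    kill -0 "$pid" || exit                          # still running?
    name=$(ps --no-headers -o comm= "$pid")         # -> reactor_1 in this run
    [ "$name" != sudo ] || exit 1                   # never signal a sudo wrapper blindly
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"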
00:38:16.816 [2024-10-13 14:34:20.501135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973499 ] 00:38:17.077 [2024-10-13 14:34:20.631856] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:17.077 [2024-10-13 14:34:20.680993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.077 [2024-10-13 14:34:20.696697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:17.647 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:17.647 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:17.647 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:17.647 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:17.647 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:17.906 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:17.906 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:18.167 nvme0n1 00:38:18.167 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:18.167 14:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:18.428 Running I/O for 2 seconds... 
00:38:20.309 30295.00 IOPS, 118.34 MiB/s [2024-10-13T12:34:24.016Z] 29959.50 IOPS, 117.03 MiB/s 00:38:20.309 Latency(us) 00:38:20.309 [2024-10-13T12:34:24.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.309 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.309 nvme0n1 : 2.00 29958.03 117.02 0.00 0.00 4265.70 2217.01 10510.28 00:38:20.309 [2024-10-13T12:34:24.016Z] =================================================================================================================== 00:38:20.309 [2024-10-13T12:34:24.016Z] Total : 29958.03 117.02 0.00 0.00 4265.70 2217.01 10510.28 00:38:20.309 { 00:38:20.309 "results": [ 00:38:20.309 { 00:38:20.309 "job": "nvme0n1", 00:38:20.309 "core_mask": "0x2", 00:38:20.309 "workload": "randwrite", 00:38:20.309 "status": "finished", 00:38:20.309 "queue_depth": 128, 00:38:20.309 "io_size": 4096, 00:38:20.309 "runtime": 2.004104, 00:38:20.309 "iops": 29958.02613038046, 00:38:20.309 "mibps": 117.02353957179866, 00:38:20.309 "io_failed": 0, 00:38:20.309 "io_timeout": 0, 00:38:20.309 "avg_latency_us": 4265.701214246379, 00:38:20.309 "min_latency_us": 2217.013030404277, 00:38:20.309 "max_latency_us": 10510.283995990645 00:38:20.309 } 00:38:20.309 ], 00:38:20.309 "core_count": 1 00:38:20.309 } 00:38:20.309 14:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:20.309 14:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:20.309 14:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:20.309 14:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:20.309 | select(.opcode=="crc32c") 00:38:20.309 | "\(.module_name) \(.executed)"' 00:38:20.309 14:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1973499 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1973499 ']' 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1973499 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1973499 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1973499' 00:38:20.570 killing process with pid 1973499 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1973499 00:38:20.570 Received shutdown signal, test time was about 2.000000 seconds 00:38:20.570 00:38:20.570 Latency(us) 00:38:20.570 [2024-10-13T12:34:24.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.570 [2024-10-13T12:34:24.277Z] =================================================================================================================== 00:38:20.570 [2024-10-13T12:34:24.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1973499 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1974264 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1974264 /var/tmp/bperf.sock 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1974264 ']' 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:20.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:20.570 14:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:20.830 [2024-10-13 14:34:24.299825] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:38:20.831 [2024-10-13 14:34:24.299881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974264 ] 00:38:20.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:20.831 Zero copy mechanism will not be used. 00:38:20.831 [2024-10-13 14:34:24.429973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:20.831 [2024-10-13 14:34:24.478711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.831 [2024-10-13 14:34:24.494835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.401 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:21.401 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:38:21.401 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:21.401 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:21.401 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:21.661 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:21.662 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:21.921 nvme0n1 00:38:21.922 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:21.922 14:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:22.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:22.182 Zero copy mechanism will not be used. 00:38:22.182 Running I/O for 2 seconds... 
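This is the fourth and last nvmf_digest_clean permutation; run_digest sweeps both I/O directions and both transfer sizes against the same target, as the host/digest.sh@128-131 markers in the trace show:

    # The four run_bperf invocations covered by this test, in traced order.
    run_bperf randread  4096   128 false   #  4 KiB reads,  qd 128
    run_bperf randread  131072 16  false   # 128 KiB reads,  qd 16
    run_bperf randwrite 4096   128 false   #  4 KiB writes, qd 128
    run_bperf randwrite 131072 16  false   # 128 KiB writes, qd 16 (this run)

The trailing false is scan_dsa, so the software crc32c path is exercised throughout.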
00:38:24.063 6189.00 IOPS, 773.62 MiB/s [2024-10-13T12:34:27.770Z] 4742.50 IOPS, 592.81 MiB/s 00:38:24.063 Latency(us) 00:38:24.063 [2024-10-13T12:34:27.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.063 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:24.063 nvme0n1 : 2.01 4736.78 592.10 0.00 0.00 3371.03 1348.00 7937.45 00:38:24.063 [2024-10-13T12:34:27.770Z] =================================================================================================================== 00:38:24.063 [2024-10-13T12:34:27.770Z] Total : 4736.78 592.10 0.00 0.00 3371.03 1348.00 7937.45 00:38:24.063 { 00:38:24.063 "results": [ 00:38:24.063 { 00:38:24.063 "job": "nvme0n1", 00:38:24.063 "core_mask": "0x2", 00:38:24.063 "workload": "randwrite", 00:38:24.063 "status": "finished", 00:38:24.063 "queue_depth": 16, 00:38:24.063 "io_size": 131072, 00:38:24.063 "runtime": 2.005793, 00:38:24.063 "iops": 4736.779916970495, 00:38:24.063 "mibps": 592.0974896213119, 00:38:24.063 "io_failed": 0, 00:38:24.063 "io_timeout": 0, 00:38:24.063 "avg_latency_us": 3371.031948278573, 00:38:24.063 "min_latency_us": 1347.9986635482794, 00:38:24.063 "max_latency_us": 7937.454059472102 00:38:24.063 } 00:38:24.063 ], 00:38:24.063 "core_count": 1 00:38:24.063 } 00:38:24.063 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:24.063 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:24.063 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:24.063 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:24.063 | select(.opcode=="crc32c") 00:38:24.063 | "\(.module_name) \(.executed)"' 00:38:24.063 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:24.326 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1974264 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1974264 ']' 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1974264 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1974264 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1974264' 00:38:24.327 killing process with pid 1974264 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1974264 00:38:24.327 Received shutdown signal, test time was about 2.000000 seconds 00:38:24.327 00:38:24.327 Latency(us) 00:38:24.327 [2024-10-13T12:34:28.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.327 [2024-10-13T12:34:28.034Z] =================================================================================================================== 00:38:24.327 [2024-10-13T12:34:28.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1974264 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1971878 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1971878 ']' 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1971878 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.327 14:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1971878 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1971878' 00:38:24.588 killing process with pid 1971878 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1971878 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1971878 00:38:24.588 00:38:24.588 real 0m16.437s 00:38:24.588 user 0m32.041s 00:38:24.588 sys 0m3.569s 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:24.588 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:24.588 ************************************ 00:38:24.588 END TEST nvmf_digest_clean 00:38:24.588 ************************************ 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:24.589 ************************************ 00:38:24.589 START TEST nvmf_digest_error 00:38:24.589 ************************************ 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1974994 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1974994 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1974994 ']' 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:24.589 14:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:24.589 [2024-10-13 14:34:28.292694] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:24.589 [2024-10-13 14:34:28.292751] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.848 [2024-10-13 14:34:28.433291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:24.848 [2024-10-13 14:34:28.480054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.848 [2024-10-13 14:34:28.502268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.848 [2024-10-13 14:34:28.502308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.848 [2024-10-13 14:34:28.502315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.848 [2024-10-13 14:34:28.502321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.848 [2024-10-13 14:34:28.502326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
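The target above is launched with --wait-for-rpc so that the crc32c opcode can be rerouted to the error-injection accel module before framework initialization completes. A minimal sketch of that pre-init step, assuming the stock rpc.py client on the default /var/tmp/spdk.sock (the accel_assign_opc call appears verbatim in the trace below; framework_start_init is the usual call that resumes startup and is assumed here):
# run against the target while it is parked in --wait-for-rpc
scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c digests to the error-injection module (verbatim below)
scripts/rpc.py framework_start_init                  # assumed: resume subsystem initialization
The trace then builds the usual digest-test target: a null0 bdev and a TCP listener on 10.0.0.2:4420, both visible in the NOTICE lines that follow.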
00:38:24.848 [2024-10-13 14:34:28.502944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.417 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:25.676 [2024-10-13 14:34:29.123274] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:25.676 null0 00:38:25.676 [2024-10-13 14:34:29.195537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.676 [2024-10-13 14:34:29.219683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1975146 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1975146 /var/tmp/bperf.sock 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1975146 ']' 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
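Consolidated from the shell trace that follows, the client side drives bdevperf entirely over its RPC socket. A sketch with the Jenkins workspace prefix dropped; the commands themselves are verbatim from the trace, while the comments are interpretive:
# bdevperf parked with -z: no I/O until the perform_tests RPC arrives
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # retry indefinitely, keep NVMe error stats
scripts/rpc.py accel_error_inject_error -o crc32c -t disable                                          # target side: no injection while connecting
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256                                   # re-arm injection: corrupt crc32c results (arguments verbatim)
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests                               # kick off the 2-second randread run
The flood of 'data digest error' / 'COMMAND TRANSIENT TRANSPORT ERROR' pairs further down appears to be the expected outcome: each corrupted digest fails a read with a retryable status (dnr:0), and with --bdev-retry-count -1 the bdev layer retries it.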
00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:25.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:25.676 14:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:25.676 [2024-10-13 14:34:29.275633] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:25.676 [2024-10-13 14:34:29.275679] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975146 ] 00:38:25.936 [2024-10-13 14:34:29.406245] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:25.936 [2024-10-13 14:34:29.453356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.936 [2024-10-13 14:34:29.470049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.505 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:26.505 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:38:26.505 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:26.505 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:26.764 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:26.764 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.764 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:26.764 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.764 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:26.764 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:27.025 nvme0n1 00:38:27.025 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:27.025 14:34:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.025 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:27.025 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.025 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:27.025 14:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:27.025 Running I/O for 2 seconds... 00:38:27.025 [2024-10-13 14:34:30.640485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.640513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.640522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.652002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.652022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.652029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.663080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.663098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.663105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.671623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.671642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.671648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.680735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.680752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.680763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.690288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.690306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:103 nsid:1 lba:3583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.690312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.699688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.699705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.699713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.707927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.707944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.707951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.716763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.716781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.716787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.025 [2024-10-13 14:34:30.725950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.025 [2024-10-13 14:34:30.725968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.025 [2024-10-13 14:34:30.725975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.735844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.735862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.735868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.743177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.743194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.743203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.755269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.755287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.755294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.767204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.767228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.767235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.779673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.779690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.779697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.790483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.790500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.790506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.798882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.798899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.798905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.808189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.808205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.808212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.816814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.816831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.816838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.825719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 
00:38:27.287 [2024-10-13 14:34:30.825736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.825742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.835033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.835050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.835057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.843573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.843590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.843596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.852164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.852181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.852188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.861546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.861570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.870535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.870553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.870559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.879172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.879189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.879195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.888492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.888510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.888516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.897667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.897684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.897691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.906174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.906191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.906197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.915235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.915252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.915258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.924053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.924075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.924084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.933373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.933391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.933397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.942576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.942593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.942600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.951209] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.951226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.951232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.958847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.287 [2024-10-13 14:34:30.958864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.287 [2024-10-13 14:34:30.958872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.287 [2024-10-13 14:34:30.969393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.288 [2024-10-13 14:34:30.969410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.288 [2024-10-13 14:34:30.969417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.288 [2024-10-13 14:34:30.978989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.288 [2024-10-13 14:34:30.979007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.288 [2024-10-13 14:34:30.979013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.288 [2024-10-13 14:34:30.987564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.288 [2024-10-13 14:34:30.987581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.288 [2024-10-13 14:34:30.987587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.548 [2024-10-13 14:34:30.995463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.548 [2024-10-13 14:34:30.995479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.548 [2024-10-13 14:34:30.995486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.548 [2024-10-13 14:34:31.005197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.548 [2024-10-13 14:34:31.005215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.548 [2024-10-13 14:34:31.005222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:38:27.548 [2024-10-13 14:34:31.014694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.548 [2024-10-13 14:34:31.014712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.548 [2024-10-13 14:34:31.014718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.548 [2024-10-13 14:34:31.023216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.548 [2024-10-13 14:34:31.023234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.548 [2024-10-13 14:34:31.023240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.548 [2024-10-13 14:34:31.031396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.548 [2024-10-13 14:34:31.031413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.548 [2024-10-13 14:34:31.031419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.041278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.041296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.041302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.049189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.049206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.049212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.060386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.060403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.060409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.069273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.069290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.069297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.078657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.078674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.078684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.087563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.087580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.087586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.095442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.095460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.095466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.105180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.105197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.105203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.113733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.113750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.113756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.122386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.122403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.122410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.131554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.131571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.131578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.140810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.140827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.140833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.149632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.149649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.149655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.158488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.158508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.158514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.167256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.167273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.167280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.176255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.176272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.176278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.184987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.185004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.185010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.193865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.193882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:27.549 [2024-10-13 14:34:31.193888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.202546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.202563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.202569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.211789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.211806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.211812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.220518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.220535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.220541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.230440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.230457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.230464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.239448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.239465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.239471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.549 [2024-10-13 14:34:31.249152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.549 [2024-10-13 14:34:31.249169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.549 [2024-10-13 14:34:31.249175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.258492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.258510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:19278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.258516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.266820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.266837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.266843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.276563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.276580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.276587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.284374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.284391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.284397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.293718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.293735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.293741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.303304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.303321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.303327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.310986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.311003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:27.811 [2024-10-13 14:34:31.311012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:27.811 [2024-10-13 14:34:31.320611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:27.811 [2024-10-13 14:34:31.320628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.811 [2024-10-13 14:34:31.320634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:27.811 [2024-10-13 14:34:31.330555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270)
00:38:27.811 [2024-10-13 14:34:31.330572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:27.811 [2024-10-13 14:34:31.330578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of further READs between 14:34:31.339 and 14:34:31.620 fail with the same three-record pattern -- data digest error on tqpair=(0x1319270) from nvme_tcp.c:1470, the READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- with only the timestamp, cid, and lba varying ...]
00:38:28.073 27602.00 IOPS, 107.82 MiB/s [2024-10-13T12:34:31.780Z]
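Note on the records above: they come from the NVMe/TCP data digest path. nvme_tcp.c:1470 (nvme_tcp_accel_seq_recv_compute_crc32_done) reports a CRC32C mismatch on a received data PDU, and the owning READ is then completed with status (00/22), which decodes as SCT 0x0 (Generic Command Status) / SC 0x22 (Transient Transport Error). dnr:0 means the Do Not Retry bit is clear, so the initiator may resubmit, and the interleaved bdevperf sample (27602.00 IOPS, 107.82 MiB/s) shows I/O keeps flowing around the failures. A minimal post-processing sketch for summarizing such a run from a saved console log follows; the filename is an assumption and the script is illustrative, not part of the autotest suite.

#!/usr/bin/env bash
# Hypothetical helper (not part of the SPDK autotest scripts): summarize the
# digest-error records in a saved console log. The filename is an assumption.
LOG=nvmf-tcp-phy-autotest-console.log

# Total data digest errors reported on the TCP qpair. grep -o is used because
# a single physical console line can hold several log records.
grep -o 'data digest error on tqpair' "$LOG" | wc -l

# Tally (00/22) completions per command identifier; healthy output shows the
# errors spread across many cids rather than one stuck command.
grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:[0-9]*' "$LOG" \
  | awk -F'cid:' '{print $2}' | sort -n | uniq -c | sort -rn | head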
[2024-10-13 14:34:31.629646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270)
00:38:28.073 [2024-10-13 14:34:31.629662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:28.073 [2024-10-13 14:34:31.629669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern continues for dozens more READs between 14:34:31.638 and 14:34:32.544, again varying only in timestamp, cid, and lba ...]
00:38:28.859 [2024-10-13 14:34:32.553089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270)
00:38:28.859 [2024-10-13 14:34:32.553106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:28.859 [2024-10-13 14:34:32.553112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:28.859 [2024-10-13 14:34:32.562695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270)
00:38:28.859 [2024-10-13 14:34:32.562712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:28.859 [2024-10-13 14:34:32.562719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0
dnr:0 00:38:29.119 [2024-10-13 14:34:32.572619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.572637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.119 [2024-10-13 14:34:32.572643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.119 [2024-10-13 14:34:32.580920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.580936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.119 [2024-10-13 14:34:32.580943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.119 [2024-10-13 14:34:32.589978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.589995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.119 [2024-10-13 14:34:32.590001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.119 [2024-10-13 14:34:32.598913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.598930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.119 [2024-10-13 14:34:32.598936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.119 [2024-10-13 14:34:32.608041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.608058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.119 [2024-10-13 14:34:32.608075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.119 [2024-10-13 14:34:32.616793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.616810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.119 [2024-10-13 14:34:32.616816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:29.119 27875.00 IOPS, 108.89 MiB/s [2024-10-13T12:34:32.826Z] [2024-10-13 14:34:32.625102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1319270) 00:38:29.119 [2024-10-13 14:34:32.625119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:29.120 [2024-10-13 14:34:32.625125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:29.120
00:38:29.120 Latency(us)
00:38:29.120 [2024-10-13T12:34:32.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:29.120 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:38:29.120 nvme0n1 : 2.00 27881.33 108.91 0.00 0.00 4585.06 2203.33 15436.98
00:38:29.120 [2024-10-13T12:34:32.827Z] ===================================================================================================================
00:38:29.120 [2024-10-13T12:34:32.827Z] Total : 27881.33 108.91 0.00 0.00 4585.06 2203.33 15436.98
00:38:29.120 {
00:38:29.120 "results": [
00:38:29.120 {
00:38:29.120 "job": "nvme0n1",
00:38:29.120 "core_mask": "0x2",
00:38:29.120 "workload": "randread",
00:38:29.120 "status": "finished",
00:38:29.120 "queue_depth": 128,
00:38:29.120 "io_size": 4096,
00:38:29.120 "runtime": 2.004137,
00:38:29.120 "iops": 27881.327474119782,
00:38:29.120 "mibps": 108.9114354457804,
00:38:29.120 "io_failed": 0,
00:38:29.120 "io_timeout": 0,
00:38:29.120 "avg_latency_us": 4585.057727847672,
00:38:29.120 "min_latency_us": 2203.327764784497,
00:38:29.120 "max_latency_us": 15436.97961911126
00:38:29.120 }
00:38:29.120 ],
00:38:29.120 "core_count": 1
00:38:29.120 }
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:29.120 | .driver_specific
00:38:29.120 | .nvme_error
00:38:29.120 | .status_code
00:38:29.120 | .command_transient_transport_error'
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1975146
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1975146 ']'
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1975146
00:38:29.120 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1975146
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1975146'
00:38:29.380 killing process with pid 1975146
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1975146
00:38:29.380 Received shutdown signal, test time was about 2.000000 seconds
00:38:29.380
00:38:29.380 Latency(us)
00:38:29.380 [2024-10-13T12:34:33.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:29.380 [2024-10-13T12:34:33.087Z] ===================================================================================================================
00:38:29.380 [2024-10-13T12:34:33.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1975146
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1975892
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1975892 /var/tmp/bperf.sock
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1975892 ']'
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:29.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:29.380 14:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:29.380 [2024-10-13 14:34:33.037713] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:38:29.380 [2024-10-13 14:34:33.037767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975892 ]
00:38:29.380 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:29.380 Zero copy mechanism will not be used.
00:38:29.641 [2024-10-13 14:34:33.167932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:38:29.641 [2024-10-13 14:34:33.215409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:29.641 [2024-10-13 14:34:33.231557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:30.210 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:30.210 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:38:30.210 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:30.210 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:30.470 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:30.470 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:30.470 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:30.470 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:30.470 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:30.470 14:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:30.731 nvme0n1
00:38:30.731 14:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:30.731 14:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:30.731 14:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:30.731 14:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:30.731 14:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:30.731 14:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:30.731 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:30.731 Zero copy mechanism will not be used.
00:38:30.731 Running I/O for 2 seconds...
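For orientation, the second bdevperf pass traced above reduces to the short sequence below. This is a condensed sketch reassembled from the commands as traced (same SPDK checkout path, RPC socket, and target address as this run), not a verbatim excerpt of host/digest.sh; the SPDK/SOCK variables and the final count check are editorial shorthand.

# Condensed sketch of the traced digest-error pass (assumptions: paths and addresses exactly as logged above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock
# Start bdevperf pinned to core 1 (core mask 0x2): 128 KiB random reads, queue depth 16, 2 s,
# with -z so it idles until perform_tests arrives over the RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
# Keep per-command NVMe error statistics and retry failed I/O indefinitely, so injected
# digest errors are counted rather than failing the job.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any leftover crc32c injection, attach the target with data digest enabled (--ddgst),
# then corrupt every 32nd crc32c operation.
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the timed run, then read back how many completions carried
# COMMAND TRANSIENT TRANSPORT ERROR (the 4 KiB pass above counted 219).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
errs=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))

Because bdevperf is started with -z, it sits idle until bdevperf.py issues perform_tests over the same socket, which is why all of the RPC configuration above can complete before any I/O starts.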
00:38:30.992 [2024-10-13 14:34:34.437340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.437370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.437379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.446824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.446845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.446852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.457249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.457268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.457275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.468857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.468876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.468882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.476176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.476194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.476201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.485685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.485703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.485710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.496163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.496181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.496187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.507277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.507295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.507301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.519305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.519323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.519330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.528559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.528577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.528583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.539757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.539775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.539782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.550655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.550674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.550681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.559495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.559513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.559519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.570625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.570644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.570650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.581850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.581869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.581876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.593561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.593580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.593586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.601590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.601614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.992 [2024-10-13 14:34:34.601621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.992 [2024-10-13 14:34:34.610263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.992 [2024-10-13 14:34:34.610282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.610288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.619441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.619459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.619465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.629508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.629526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.629532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.639839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.639857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 
[2024-10-13 14:34:34.639864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.650967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.650985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.650991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.662205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.662223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.662230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.671652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.671670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.671677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.679298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.679316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.679323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:30.993 [2024-10-13 14:34:34.690993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:30.993 [2024-10-13 14:34:34.691011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:30.993 [2024-10-13 14:34:34.691018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.702687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.702705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.702712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.713795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.713814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.713820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.722849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.722867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.722874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.733886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.733904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.733910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.744350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.744368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.744375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.755010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.755028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.755034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.763923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.763942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.763948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.774963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.774981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.774990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.785534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.785552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.785558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.795934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.795952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.795958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.803154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.803172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.803178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.811387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.811406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.811412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.821623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.821641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.821647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.832132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.832150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.843778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.843796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.843802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.254 [2024-10-13 14:34:34.852219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.254 [2024-10-13 14:34:34.852237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.254 [2024-10-13 14:34:34.852244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.859918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.859939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.859945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.871079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.871098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.871104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.883772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.883791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.883797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.893249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.893267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.893274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.903844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.903862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.903868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.915479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.915497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.915504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.925470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 
[2024-10-13 14:34:34.925488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.925494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.936458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.936477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.936483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.947983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.948001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.948007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.255 [2024-10-13 14:34:34.959275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.255 [2024-10-13 14:34:34.959294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.255 [2024-10-13 14:34:34.959300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.516 [2024-10-13 14:34:34.970333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.516 [2024-10-13 14:34:34.970352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.516 [2024-10-13 14:34:34.970358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.516 [2024-10-13 14:34:34.982020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.516 [2024-10-13 14:34:34.982038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.516 [2024-10-13 14:34:34.982044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.516 [2024-10-13 14:34:34.993178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.516 [2024-10-13 14:34:34.993197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.516 [2024-10-13 14:34:34.993203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.516 [2024-10-13 14:34:35.004446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x109f0b0) 00:38:31.516 [2024-10-13 14:34:35.004464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.516 [2024-10-13 14:34:35.004470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.516 [2024-10-13 14:34:35.016328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.516 [2024-10-13 14:34:35.016346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.516 [2024-10-13 14:34:35.016352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.516 [2024-10-13 14:34:35.028114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.028132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.028138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.039833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.039851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.039858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.049759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.049778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.049787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.061897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.061915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.061922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.073465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.073483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.073489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.084306] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.084324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.084330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.094351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.094369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.094375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.105670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.105688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.105694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.115897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.115915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.115922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.125574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.125592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.125598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.135912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.135930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.135936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.147374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.147396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.147402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
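As a consistency note on the completions above: the first pass used a 4096-byte I/O size and every READ carried len:1, while this 131072-byte pass carries len:32 on every READ, i.e. 32 logical blocks of 4096 bytes per command:

echo $(( 131072 / 4096 ))   # prints 32, matching the len:32 fields in this run's READ entries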
00:38:31.517 [2024-10-13 14:34:35.159527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.159546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.159553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.166565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.166584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.166590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.175870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.175888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.175894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.186324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.186342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.186348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.195074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.195091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.195097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.205465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.205482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.205489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.517 [2024-10-13 14:34:35.215555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.517 [2024-10-13 14:34:35.215572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.517 [2024-10-13 14:34:35.215579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.226588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.226606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.226612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.237532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.237551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.237557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.247841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.247859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.247866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.259086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.259104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.259110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.268657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.268674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.268680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.279736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.279753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.279759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.290175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.290193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.290200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.298452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.298469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.298476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.308634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.779 [2024-10-13 14:34:35.308652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.779 [2024-10-13 14:34:35.308659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.779 [2024-10-13 14:34:35.319888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.319909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.319916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.331769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.331788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.331794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.342056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.342079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.342086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.353764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.353782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.353788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.364206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.364224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.364230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.375927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.375945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.375951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.385600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.385619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.385625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.397829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.397846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.397853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.406921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.406939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.406945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.415748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.415766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.415772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.780 2957.00 IOPS, 369.62 MiB/s [2024-10-13T12:34:35.487Z] [2024-10-13 14:34:35.426591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.426609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.426616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.436917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.436935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.436942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.448107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.448124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.448131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.458831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.458849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.458855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.468126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.468144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.468150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:31.780 [2024-10-13 14:34:35.477323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:31.780 [2024-10-13 14:34:35.477341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:31.780 [2024-10-13 14:34:35.477347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.487138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.487156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.487162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.495472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.495489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.495498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.505083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.505099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.505106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.515130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.515148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.515154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.526021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.526039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.526045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.538203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.538221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.538227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.549816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.549834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.549840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.041 [2024-10-13 14:34:35.562389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.041 [2024-10-13 14:34:35.562407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.041 [2024-10-13 14:34:35.562413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.574617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.574635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.574641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.586550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 
[2024-10-13 14:34:35.586568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.586574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.595707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.595728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.595734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.607083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.607101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.607107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.618340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.618358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.618364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.631018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.631035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.631041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.642415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.642432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.642438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.654255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.654273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.654279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.665242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.665260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.665266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.676420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.676438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.676444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.687561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.687579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.687585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.699412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.699430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.699436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.709705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.709723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.709729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.721218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.721235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.721241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.730795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.730813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.730819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.042 [2024-10-13 14:34:35.743112] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.042 [2024-10-13 14:34:35.743130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.042 [2024-10-13 14:34:35.743136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.303 [2024-10-13 14:34:35.752055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.752077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.752083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.762821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.762839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.762845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.772216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.772233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.772240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.783342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.783360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.783369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.794840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.794857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.794863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.804080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.804097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.804103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
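Every triplet in this run decodes the same way: the host-side receive path (nvme_tcp.c) recomputes the CRC32C data digest of an inbound PDU through the accel sequence, the check fails, and the affected READ completes with NVMe status sct/sc 00/22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR. A quick tally over a saved copy of this console output can be pulled with a one-liner like the following (a sketch; build.log is a hypothetical name for the captured log file):

  # Count digest-induced transient transport errors in the captured log.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log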
00:38:32.304 [2024-10-13 14:34:35.815797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.815815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.815822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.824731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.824748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.824755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.835620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.835638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.835644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.846413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.846431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.846437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.856002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.856021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.856027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.867336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.867353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.867360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.879108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.879126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.879132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.890889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.890906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.890912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.901046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.901068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.901075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.912777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.912794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.912801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.921739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.921757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.921764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.928440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.928458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.928464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.938626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.938644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.938650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.947823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.947841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.947847] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.956951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.956969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.956981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.966874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.966892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.966898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.976563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.976581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.976587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.988051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.988074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.988080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.304 [2024-10-13 14:34:35.999419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.304 [2024-10-13 14:34:35.999437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.304 [2024-10-13 14:34:35.999443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.010103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.010121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.010128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.021229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.021247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 
14:34:36.021253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.032296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.032313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.032319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.041304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.041322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.041328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.052238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.052259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.052265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.064303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.064321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.064327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.075677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.075695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.075701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.087101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.087119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.087125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.096546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.096564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.096570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.107209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.107226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.107232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.115170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.115187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.115193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.121158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.121176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.121182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.131884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.131902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.131908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.142978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.142996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.143002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.153337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.153356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.153363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.164869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.164886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.164893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.175569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.175586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.175593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.187888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.187905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.187911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.200562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.200581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.200587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.213174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.213192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.213199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.566 [2024-10-13 14:34:36.225637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.566 [2024-10-13 14:34:36.225655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.566 [2024-10-13 14:34:36.225661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.567 [2024-10-13 14:34:36.234284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.567 [2024-10-13 14:34:36.234302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.567 [2024-10-13 14:34:36.234311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.567 [2024-10-13 14:34:36.244837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.567 [2024-10-13 14:34:36.244854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.567 [2024-10-13 14:34:36.244860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.567 [2024-10-13 14:34:36.255808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.567 [2024-10-13 14:34:36.255827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.567 [2024-10-13 14:34:36.255833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.567 [2024-10-13 14:34:36.266648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.567 [2024-10-13 14:34:36.266666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.567 [2024-10-13 14:34:36.266673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.828 [2024-10-13 14:34:36.274888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.828 [2024-10-13 14:34:36.274905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.828 [2024-10-13 14:34:36.274911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.828 [2024-10-13 14:34:36.282405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.828 [2024-10-13 14:34:36.282422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.828 [2024-10-13 14:34:36.282429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.828 [2024-10-13 14:34:36.293241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.828 [2024-10-13 14:34:36.293259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.828 [2024-10-13 14:34:36.293265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.302518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.302535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.302541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.308590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 
00:38:32.829 [2024-10-13 14:34:36.308608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.308615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.316446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.316466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.316472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.327253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.327271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.327277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.338504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.338522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.338528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.349646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.349664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.349670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.361569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.361587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.361593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.374295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0) 00:38:32.829 [2024-10-13 14:34:36.374313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:32.829 [2024-10-13 14:34:36.374320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:32.829 [2024-10-13 14:34:36.386806] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0)
00:38:32.829 [2024-10-13 14:34:36.386823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:32.829 [2024-10-13 14:34:36.386829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:32.829 [2024-10-13 14:34:36.397657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0)
00:38:32.829 [2024-10-13 14:34:36.397674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:32.829 [2024-10-13 14:34:36.397681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:32.829 [2024-10-13 14:34:36.409417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0)
00:38:32.829 [2024-10-13 14:34:36.409435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:32.829 [2024-10-13 14:34:36.409444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:32.829 2938.00 IOPS, 367.25 MiB/s [2024-10-13T12:34:36.536Z] [2024-10-13 14:34:36.422569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109f0b0)
00:38:32.829 [2024-10-13 14:34:36.422587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:32.829 [2024-10-13 14:34:36.422593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:32.829
00:38:32.829 Latency(us)
00:38:32.829 [2024-10-13T12:34:36.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:32.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:38:32.829 nvme0n1 : 2.01 2938.13 367.27 0.00 0.00 5441.45 1033.24 13247.34
00:38:32.829 [2024-10-13T12:34:36.536Z] ===================================================================================================================
00:38:32.829 [2024-10-13T12:34:36.536Z] Total : 2938.13 367.27 0.00 0.00 5441.45 1033.24 13247.34
00:38:32.829 {
00:38:32.829 "results": [
00:38:32.829 {
00:38:32.829 "job": "nvme0n1",
00:38:32.829 "core_mask": "0x2",
00:38:32.829 "workload": "randread",
00:38:32.829 "status": "finished",
00:38:32.829 "queue_depth": 16,
00:38:32.829 "io_size": 131072,
00:38:32.829 "runtime": 2.005358,
00:38:32.829 "iops": 2938.128753070524,
00:38:32.829 "mibps": 367.2660941338155,
00:38:32.829 "io_failed": 0,
00:38:32.829 "io_timeout": 0,
00:38:32.829 "avg_latency_us": 5441.446380091678,
00:38:32.829 "min_latency_us": 1033.2375542933512,
00:38:32.829 "max_latency_us": 13247.337119946542
00:38:32.829 }
00:38:32.829 ],
00:38:32.829 "core_count": 1
00:38:32.829 }
00:38:32.829 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:32.829 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:32.829 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:32.829 | .driver_specific
00:38:32.829 | .nvme_error
00:38:32.829 | .status_code
00:38:32.829 | .command_transient_transport_error'
00:38:32.829 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 ))
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1975892
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1975892 ']'
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1975892
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1975892
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1975892'
00:38:33.091 killing process with pid 1975892
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1975892
00:38:33.091 Received shutdown signal, test time was about 2.000000 seconds
00:38:33.091
00:38:33.091 Latency(us)
00:38:33.091 [2024-10-13T12:34:36.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:33.091 [2024-10-13T12:34:36.798Z] ===================================================================================================================
00:38:33.091 [2024-10-13T12:34:36.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1975892
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1976666
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1976666 /var/tmp/bperf.sock
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1976666 ']'
00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
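The get_transient_errcount call traced at digest.sh@27/@28 above reduces to a small helper along these lines (a sketch reconstructed from the xtrace output; the rpc.py path, socket, and jq filter are exactly the ones shown in the trace):

  get_transient_errcount() {
      local bdev=$1
      # Ask the bdevperf app for per-bdev I/O stats over its RPC socket, then
      # extract the counter that bdev_nvme_set_options --nvme-error-stat
      # accumulates for transient transport errors.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

The (( 190 > 0 )) check at digest.sh@71 above is this helper's output for nvme0n1 plugged into the test's assertion: 190 digest-corrupted reads were counted, so the randread error pass succeeds before the harness kills the old bdevperf and relaunches it for the randwrite pass.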
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:33.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:33.091 14:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:33.352 [2024-10-13 14:34:36.836481] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:33.352 [2024-10-13 14:34:36.836539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976666 ] 00:38:33.352 [2024-10-13 14:34:36.967215] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:33.352 [2024-10-13 14:34:37.013324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.352 [2024-10-13 14:34:37.029458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:34.294 14:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:34.555 nvme0n1 00:38:34.816 14:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:38:34.816 14:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.816 14:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:34.816 14:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.816 14:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:38:34.816 14:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:34.816 Running I/O for 2 seconds... 00:38:34.816 [2024-10-13 14:34:38.367080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f81e0 00:38:34.816 [2024-10-13 14:34:38.367843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.367868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.376111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f92c0 00:38:34.816 [2024-10-13 14:34:38.376812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.376831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.384629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fa3a0 00:38:34.816 [2024-10-13 14:34:38.385332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.385348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.393239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ea248 00:38:34.816 [2024-10-13 14:34:38.393975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.393991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.401766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eb328 00:38:34.816 [2024-10-13 14:34:38.402516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.402532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.410268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ec408 00:38:34.816 [2024-10-13 
14:34:38.410960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.410975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.418776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ed4e8 00:38:34.816 [2024-10-13 14:34:38.419537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.419553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.427296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ee5c8 00:38:34.816 [2024-10-13 14:34:38.428031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.428047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.435818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ef6a8 00:38:34.816 [2024-10-13 14:34:38.436556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.436572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.444332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0788 00:38:34.816 [2024-10-13 14:34:38.445079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.445095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.452812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f1868 00:38:34.816 [2024-10-13 14:34:38.453564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.453580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.461296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2948 00:38:34.816 [2024-10-13 14:34:38.462026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.462042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.469777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f3a28 
00:38:34.816 [2024-10-13 14:34:38.470529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.470544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.478270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f4b08 00:38:34.816 [2024-10-13 14:34:38.479000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.479015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.486729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f5be8 00:38:34.816 [2024-10-13 14:34:38.487463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.487479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.495201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f6cc8 00:38:34.816 [2024-10-13 14:34:38.495949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.495964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.503692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f7da8 00:38:34.816 [2024-10-13 14:34:38.504438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.504453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.512186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f8e88 00:38:34.816 [2024-10-13 14:34:38.512917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.512933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:34.816 [2024-10-13 14:34:38.520674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f9f68 00:38:34.816 [2024-10-13 14:34:38.521414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:34.816 [2024-10-13 14:34:38.521430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.077 [2024-10-13 14:34:38.529139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) 
with pdu=0x2000166fb048 00:38:35.077 [2024-10-13 14:34:38.529868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.077 [2024-10-13 14:34:38.529883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.077 [2024-10-13 14:34:38.537611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ea680 00:38:35.077 [2024-10-13 14:34:38.538359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.077 [2024-10-13 14:34:38.538375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.077 [2024-10-13 14:34:38.546082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eb760 00:38:35.077 [2024-10-13 14:34:38.546805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.077 [2024-10-13 14:34:38.546820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.077 [2024-10-13 14:34:38.554581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ec840 00:38:35.077 [2024-10-13 14:34:38.555318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.077 [2024-10-13 14:34:38.555334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.077 [2024-10-13 14:34:38.563051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ed920 00:38:35.077 [2024-10-13 14:34:38.563790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.077 [2024-10-13 14:34:38.563811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.571513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eea00 00:38:35.078 [2024-10-13 14:34:38.572243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.572258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.579979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166efae0 00:38:35.078 [2024-10-13 14:34:38.580720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.580736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.588450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f66200) with pdu=0x2000166f0bc0 00:38:35.078 [2024-10-13 14:34:38.589178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.589193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.596929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f1ca0 00:38:35.078 [2024-10-13 14:34:38.597679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.597695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.605435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2d80 00:38:35.078 [2024-10-13 14:34:38.606145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.606161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.613931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f3e60 00:38:35.078 [2024-10-13 14:34:38.614671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.614687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.622407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f4f40 00:38:35.078 [2024-10-13 14:34:38.623142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.623157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.630869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f6020 00:38:35.078 [2024-10-13 14:34:38.631603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.631618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.639351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f7100 00:38:35.078 [2024-10-13 14:34:38.640057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.640076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.647820] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f81e0 00:38:35.078 [2024-10-13 14:34:38.648559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.648575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.656288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f92c0 00:38:35.078 [2024-10-13 14:34:38.657034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.657050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.664769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fa3a0 00:38:35.078 [2024-10-13 14:34:38.665519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.665534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.673238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ea248 00:38:35.078 [2024-10-13 14:34:38.673960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.673975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.681706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eb328 00:38:35.078 [2024-10-13 14:34:38.682432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.682448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.690202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ec408 00:38:35.078 [2024-10-13 14:34:38.690952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.690968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.698745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ed4e8 00:38:35.078 [2024-10-13 14:34:38.699480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.699495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.707248] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ee5c8 00:38:35.078 [2024-10-13 14:34:38.707993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.708008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.715715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ef6a8 00:38:35.078 [2024-10-13 14:34:38.716403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.716419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.724411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e6b70 00:38:35.078 [2024-10-13 14:34:38.724943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.724959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.732756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eee38 00:38:35.078 [2024-10-13 14:34:38.733238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.733254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.741619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dfdc0 00:38:35.078 [2024-10-13 14:34:38.742462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.742477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.750029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dece0 00:38:35.078 [2024-10-13 14:34:38.750876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.750891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.758505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8d30 00:38:35.078 [2024-10-13 14:34:38.759408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.759424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.078 
[2024-10-13 14:34:38.767072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fbcf0 00:38:35.078 [2024-10-13 14:34:38.767893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.767908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.078 [2024-10-13 14:34:38.775537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166feb58 00:38:35.078 [2024-10-13 14:34:38.776367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.078 [2024-10-13 14:34:38.776383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.784020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fe2e8 00:38:35.339 [2024-10-13 14:34:38.784845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.784863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.792504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166de8a8 00:38:35.339 [2024-10-13 14:34:38.793305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.793320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.800982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ee190 00:38:35.339 [2024-10-13 14:34:38.801825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.801840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.809452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ef270 00:38:35.339 [2024-10-13 14:34:38.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.810297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.817909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0350 00:38:35.339 [2024-10-13 14:34:38.818741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.818756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005b p:0 
m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.826433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f20d8 00:38:35.339 [2024-10-13 14:34:38.827229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.827244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.834914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e99d8 00:38:35.339 [2024-10-13 14:34:38.835746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.835761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.843390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1b48 00:38:35.339 [2024-10-13 14:34:38.844174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.844189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.851861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e2c28 00:38:35.339 [2024-10-13 14:34:38.852688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.852703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.860313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e3d08 00:38:35.339 [2024-10-13 14:34:38.861134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.861152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.868761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fd640 00:38:35.339 [2024-10-13 14:34:38.869608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.869623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.877242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0a68 00:38:35.339 [2024-10-13 14:34:38.878084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.878100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 
cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.885737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166df988 00:38:35.339 [2024-10-13 14:34:38.886522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.339 [2024-10-13 14:34:38.886538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.339 [2024-10-13 14:34:38.894252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8088 00:38:35.339 [2024-10-13 14:34:38.895081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.895097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.902719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e9168 00:38:35.340 [2024-10-13 14:34:38.903545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.903561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.911182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fc128 00:38:35.340 [2024-10-13 14:34:38.912015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.912030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.919649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ff3c8 00:38:35.340 [2024-10-13 14:34:38.920475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.920491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.928122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ddc00 00:38:35.340 [2024-10-13 14:34:38.928948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.928963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.936620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166edd58 00:38:35.340 [2024-10-13 14:34:38.937461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.937476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.945105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eee38 00:38:35.340 [2024-10-13 14:34:38.945930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.945945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.953596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eff18 00:38:35.340 [2024-10-13 14:34:38.954426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.954442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.962079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0ff8 00:38:35.340 [2024-10-13 14:34:38.962907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.962922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.970564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2510 00:38:35.340 [2024-10-13 14:34:38.971350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.971365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.979045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1710 00:38:35.340 [2024-10-13 14:34:38.979888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.979904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.987523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e27f0 00:38:35.340 [2024-10-13 14:34:38.988365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.988380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:38.995984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e38d0 00:38:35.340 [2024-10-13 14:34:38.996814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:38.996829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:39.004438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fda78 00:38:35.340 [2024-10-13 14:34:39.005277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:39.005293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:39.012908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0ea0 00:38:35.340 [2024-10-13 14:34:39.013729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:39.013745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:39.021397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dfdc0 00:38:35.340 [2024-10-13 14:34:39.022238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:39.022254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:39.029883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dece0 00:38:35.340 [2024-10-13 14:34:39.030731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:39.030747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.340 [2024-10-13 14:34:39.038357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8d30 00:38:35.340 [2024-10-13 14:34:39.039183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.340 [2024-10-13 14:34:39.039198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.602 [2024-10-13 14:34:39.046817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fbcf0 00:38:35.603 [2024-10-13 14:34:39.047656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.047672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.055301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166feb58 00:38:35.603 [2024-10-13 14:34:39.056082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.056098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.063773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fe2e8 00:38:35.603 [2024-10-13 14:34:39.064620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.064635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.072251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166de8a8 00:38:35.603 [2024-10-13 14:34:39.073101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.073117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.080710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ee190 00:38:35.603 [2024-10-13 14:34:39.081543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.081562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.089189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ef270 00:38:35.603 [2024-10-13 14:34:39.090028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.090043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.097661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0350 00:38:35.603 [2024-10-13 14:34:39.098484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.098500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.106136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f20d8 00:38:35.603 [2024-10-13 14:34:39.106976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.106991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.114621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e99d8 00:38:35.603 [2024-10-13 14:34:39.115448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 
14:34:39.115464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.123097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1b48 00:38:35.603 [2024-10-13 14:34:39.123924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.123939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.131544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e2c28 00:38:35.603 [2024-10-13 14:34:39.132340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.132356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.140010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e3d08 00:38:35.603 [2024-10-13 14:34:39.140838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.140854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.148490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fd640 00:38:35.603 [2024-10-13 14:34:39.149309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.149325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.156959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0a68 00:38:35.603 [2024-10-13 14:34:39.157782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.157797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.165434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166df988 00:38:35.603 [2024-10-13 14:34:39.166234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.166249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.173908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8088 00:38:35.603 [2024-10-13 14:34:39.174752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:35.603 [2024-10-13 14:34:39.174768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.182358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e9168 00:38:35.603 [2024-10-13 14:34:39.183180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.183195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.190835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fc128 00:38:35.603 [2024-10-13 14:34:39.191626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.199316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ff3c8 00:38:35.603 [2024-10-13 14:34:39.200148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.200163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.207788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ddc00 00:38:35.603 [2024-10-13 14:34:39.208588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.208604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.216259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166edd58 00:38:35.603 [2024-10-13 14:34:39.217091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.217107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.224713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eee38 00:38:35.603 [2024-10-13 14:34:39.225515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.225530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.233197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eff18 00:38:35.603 [2024-10-13 14:34:39.234036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6989 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.234052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.241676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0ff8 00:38:35.603 [2024-10-13 14:34:39.242516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.242531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.250158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2510 00:38:35.603 [2024-10-13 14:34:39.250999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.251014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.258638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1710 00:38:35.603 [2024-10-13 14:34:39.259468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.259483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.267119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e27f0 00:38:35.603 [2024-10-13 14:34:39.267919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.603 [2024-10-13 14:34:39.267935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.603 [2024-10-13 14:34:39.275568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e38d0 00:38:35.603 [2024-10-13 14:34:39.276393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.604 [2024-10-13 14:34:39.276408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.604 [2024-10-13 14:34:39.284033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fda78 00:38:35.604 [2024-10-13 14:34:39.284873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.604 [2024-10-13 14:34:39.284889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.604 [2024-10-13 14:34:39.292516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0ea0 00:38:35.604 [2024-10-13 14:34:39.293331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:14000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.604 [2024-10-13 14:34:39.293346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.604 [2024-10-13 14:34:39.301002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dfdc0 00:38:35.604 [2024-10-13 14:34:39.301833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.604 [2024-10-13 14:34:39.301851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.309478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dece0 00:38:35.865 [2024-10-13 14:34:39.310269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.310284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.317937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8d30 00:38:35.865 [2024-10-13 14:34:39.318779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.318795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.326421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fbcf0 00:38:35.865 [2024-10-13 14:34:39.327239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.327255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.334886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166feb58 00:38:35.865 [2024-10-13 14:34:39.335724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.335739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.343364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fe2e8 00:38:35.865 [2024-10-13 14:34:39.344153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.344169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.351834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166de8a8 00:38:35.865 [2024-10-13 14:34:39.352664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.352679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:35.865 29813.00 IOPS, 116.46 MiB/s [2024-10-13T12:34:39.572Z] [2024-10-13 14:34:39.360280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ed920 00:38:35.865 [2024-10-13 14:34:39.361094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.361109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.368901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eea00 00:38:35.865 [2024-10-13 14:34:39.369721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.369736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.377379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166efae0 00:38:35.865 [2024-10-13 14:34:39.378212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.378227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.385855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0bc0 00:38:35.865 [2024-10-13 14:34:39.386678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.386694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.394325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2948 00:38:35.865 [2024-10-13 14:34:39.395124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.395139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.402854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e12d8 00:38:35.865 [2024-10-13 14:34:39.403668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.403683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.411312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e23b8 00:38:35.865 [2024-10-13 
14:34:39.412131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.412147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.419764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e3498 00:38:35.865 [2024-10-13 14:34:39.420595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.865 [2024-10-13 14:34:39.420610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.865 [2024-10-13 14:34:39.428240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fe720 00:38:35.866 [2024-10-13 14:34:39.429056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.429076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.436714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fcdd0 00:38:35.866 [2024-10-13 14:34:39.437551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.437566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.445196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166df988 00:38:35.866 [2024-10-13 14:34:39.445977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.445995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.453648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8088 00:38:35.866 [2024-10-13 14:34:39.454486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.454501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.462093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e9168 00:38:35.866 [2024-10-13 14:34:39.462926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.462941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.470565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fc128 
00:38:35.866 [2024-10-13 14:34:39.471409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.471425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.479052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ff3c8 00:38:35.866 [2024-10-13 14:34:39.479883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.479899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.487527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ddc00 00:38:35.866 [2024-10-13 14:34:39.488352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.488367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.495998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166edd58 00:38:35.866 [2024-10-13 14:34:39.496837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.496852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.504463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eee38 00:38:35.866 [2024-10-13 14:34:39.505250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.505266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.512921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eff18 00:38:35.866 [2024-10-13 14:34:39.513769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.513784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.521396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0ff8 00:38:35.866 [2024-10-13 14:34:39.522215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.522234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.529877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f66200) with pdu=0x2000166f2510 00:38:35.866 [2024-10-13 14:34:39.530706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.530721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.538357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1710 00:38:35.866 [2024-10-13 14:34:39.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.539189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.546822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e27f0 00:38:35.866 [2024-10-13 14:34:39.547658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.547673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.555298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e38d0 00:38:35.866 [2024-10-13 14:34:39.556121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.556136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:35.866 [2024-10-13 14:34:39.563767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fda78 00:38:35.866 [2024-10-13 14:34:39.564587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:35.866 [2024-10-13 14:34:39.564602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.572244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0ea0 00:38:36.128 [2024-10-13 14:34:39.573061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.573080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.580721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0a68 00:38:36.128 [2024-10-13 14:34:39.581561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.581576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.589193] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f66200) with pdu=0x2000166df550 00:38:36.128 [2024-10-13 14:34:39.590013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.590028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.597650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e84c0 00:38:36.128 [2024-10-13 14:34:39.598483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.598498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.606114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fb480 00:38:36.128 [2024-10-13 14:34:39.606931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.606946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.614596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fc560 00:38:36.128 [2024-10-13 14:34:39.615412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.615427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.623076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fef90 00:38:36.128 [2024-10-13 14:34:39.623898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.623913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.631556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166de038 00:38:36.128 [2024-10-13 14:34:39.632367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.632382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.640017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ed920 00:38:36.128 [2024-10-13 14:34:39.640838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.640854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.648485] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eea00 00:38:36.128 [2024-10-13 14:34:39.649315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.649331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.657005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166efae0 00:38:36.128 [2024-10-13 14:34:39.657827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.657843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.665480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0bc0 00:38:36.128 [2024-10-13 14:34:39.666278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.666293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.673947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2948 00:38:36.128 [2024-10-13 14:34:39.674779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.674794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.682413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e12d8 00:38:36.128 [2024-10-13 14:34:39.683242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.683258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.690877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e23b8 00:38:36.128 [2024-10-13 14:34:39.691702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.691717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.699343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e3498 00:38:36.128 [2024-10-13 14:34:39.700131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.700146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 
[2024-10-13 14:34:39.707818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fe720 00:38:36.128 [2024-10-13 14:34:39.708652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.708668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.716317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fcdd0 00:38:36.128 [2024-10-13 14:34:39.717127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.717142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.724843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166df988 00:38:36.128 [2024-10-13 14:34:39.725680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.725696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.734417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8088 00:38:36.128 [2024-10-13 14:34:39.735707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.735723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.741930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e49b0 00:38:36.128 [2024-10-13 14:34:39.742509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.742531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.750790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e4de8 00:38:36.128 [2024-10-13 14:34:39.751740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.751756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.759186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e5ec8 00:38:36.128 [2024-10-13 14:34:39.760120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.760136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a 
p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.767668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e6fa8 00:38:36.128 [2024-10-13 14:34:39.768579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.128 [2024-10-13 14:34:39.768595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.128 [2024-10-13 14:34:39.776141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f6cc8 00:38:36.128 [2024-10-13 14:34:39.777089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.777105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.129 [2024-10-13 14:34:39.784617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f5be8 00:38:36.129 [2024-10-13 14:34:39.785558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.785574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.129 [2024-10-13 14:34:39.793078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f4b08 00:38:36.129 [2024-10-13 14:34:39.794024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.794039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.129 [2024-10-13 14:34:39.801556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f2d80 00:38:36.129 [2024-10-13 14:34:39.802497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.802513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.129 [2024-10-13 14:34:39.810035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f8618 00:38:36.129 [2024-10-13 14:34:39.810969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.810985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.129 [2024-10-13 14:34:39.818517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f96f8 00:38:36.129 [2024-10-13 14:34:39.819452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.819467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.129 [2024-10-13 14:34:39.826991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fa7d8 00:38:36.129 [2024-10-13 14:34:39.827934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.129 [2024-10-13 14:34:39.827950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.389 [2024-10-13 14:34:39.835476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e9e10 00:38:36.389 [2024-10-13 14:34:39.836428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.389 [2024-10-13 14:34:39.836443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.389 [2024-10-13 14:34:39.844074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eaef0 00:38:36.389 [2024-10-13 14:34:39.845012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.389 [2024-10-13 14:34:39.845027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.389 [2024-10-13 14:34:39.852559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ebfd0 00:38:36.389 [2024-10-13 14:34:39.853453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.389 [2024-10-13 14:34:39.853468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.389 [2024-10-13 14:34:39.861037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ed0b0 00:38:36.390 [2024-10-13 14:34:39.861993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.862009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.869510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1b48 00:38:36.390 [2024-10-13 14:34:39.870434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.870449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.877970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e99d8 00:38:36.390 [2024-10-13 14:34:39.878910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.878926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.886439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e49b0 00:38:36.390 [2024-10-13 14:34:39.887391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.887407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.894943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e5a90 00:38:36.390 [2024-10-13 14:34:39.895899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.895915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.903431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e6b70 00:38:36.390 [2024-10-13 14:34:39.904360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.904377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.911900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f7100 00:38:36.390 [2024-10-13 14:34:39.912830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.912846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.920371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f6020 00:38:36.390 [2024-10-13 14:34:39.921269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.921285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.928834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f4f40 00:38:36.390 [2024-10-13 14:34:39.929765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.929781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.937324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f3e60 00:38:36.390 [2024-10-13 14:34:39.938243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.938258] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.945798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f7970 00:38:36.390 [2024-10-13 14:34:39.946761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.946777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.954304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f8a50 00:38:36.390 [2024-10-13 14:34:39.955262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.955278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.962785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f9b30 00:38:36.390 [2024-10-13 14:34:39.963722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.963740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.971253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fac10 00:38:36.390 [2024-10-13 14:34:39.972158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.972174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.979730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eaab8 00:38:36.390 [2024-10-13 14:34:39.980679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.980695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.988218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ebb98 00:38:36.390 [2024-10-13 14:34:39.989158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.989174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:39.996714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ecc78 00:38:36.390 [2024-10-13 14:34:39.997667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:39.997682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.005697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e7c50 00:38:36.390 [2024-10-13 14:34:40.006672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.006688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.014200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1f80 00:38:36.390 [2024-10-13 14:34:40.015138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.015154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.022676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f1430 00:38:36.390 [2024-10-13 14:34:40.023572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.023588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.031197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e4de8 00:38:36.390 [2024-10-13 14:34:40.032132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.032148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.039691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e5ec8 00:38:36.390 [2024-10-13 14:34:40.040630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.040646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.048200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e6fa8 00:38:36.390 [2024-10-13 14:34:40.049132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.049149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.056697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f6cc8 00:38:36.390 [2024-10-13 14:34:40.057605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 
14:34:40.057621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.064613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8088 00:38:36.390 [2024-10-13 14:34:40.065544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.065560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.073992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0630 00:38:36.390 [2024-10-13 14:34:40.075053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.075073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.082484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fd640 00:38:36.390 [2024-10-13 14:34:40.083515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.083531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:36.390 [2024-10-13 14:34:40.090422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fd208 00:38:36.390 [2024-10-13 14:34:40.091330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.390 [2024-10-13 14:34:40.091346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:36.652 [2024-10-13 14:34:40.099041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e4578 00:38:36.652 [2024-10-13 14:34:40.099976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.652 [2024-10-13 14:34:40.099993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:36.652 [2024-10-13 14:34:40.107512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166feb58 00:38:36.652 [2024-10-13 14:34:40.108455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.652 [2024-10-13 14:34:40.108471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:36.652 [2024-10-13 14:34:40.115981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fd208 00:38:36.652 [2024-10-13 14:34:40.116780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:36.652 [2024-10-13 14:34:40.116796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:36.652 [2024-10-13 14:34:40.124610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e0630 00:38:36.652 [2024-10-13 14:34:40.125554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.652 [2024-10-13 14:34:40.125570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.652 [2024-10-13 14:34:40.133095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fb048 00:38:36.652 [2024-10-13 14:34:40.134039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.134055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.141568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166df118 00:38:36.653 [2024-10-13 14:34:40.142517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.142533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.150026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e01f8 00:38:36.653 [2024-10-13 14:34:40.150980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.150996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.158497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f9b30 00:38:36.653 [2024-10-13 14:34:40.159447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.159463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.166967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1b48 00:38:36.653 [2024-10-13 14:34:40.167913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.167929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.175449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e95a0 00:38:36.653 [2024-10-13 14:34:40.176371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19086 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.176387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.183920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166de038 00:38:36.653 [2024-10-13 14:34:40.184808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.184827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.192396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fef90 00:38:36.653 [2024-10-13 14:34:40.193313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.193329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.200853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fc560 00:38:36.653 [2024-10-13 14:34:40.201802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.201818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.209314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fb480 00:38:36.653 [2024-10-13 14:34:40.210268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.210284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.217799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eee38 00:38:36.653 [2024-10-13 14:34:40.218709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.218725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.226278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166eff18 00:38:36.653 [2024-10-13 14:34:40.227220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.227236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.234759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f0ff8 00:38:36.653 [2024-10-13 14:34:40.235706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:20244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.235722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.243223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e2c28 00:38:36.653 [2024-10-13 14:34:40.244145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.244162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.251685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e3d08 00:38:36.653 [2024-10-13 14:34:40.252631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.252647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.260167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fd640 00:38:36.653 [2024-10-13 14:34:40.261115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.261131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.268643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f3a28 00:38:36.653 [2024-10-13 14:34:40.269572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.269589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.277113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fa3a0 00:38:36.653 [2024-10-13 14:34:40.278043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.278058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.285565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166dfdc0 00:38:36.653 [2024-10-13 14:34:40.286479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.286495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.294016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e23b8 00:38:36.653 [2024-10-13 14:34:40.294945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:70 nsid:1 lba:7688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.294961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.302479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e8088 00:38:36.653 [2024-10-13 14:34:40.303404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.303420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.310967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166e1f80 00:38:36.653 [2024-10-13 14:34:40.311914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.311930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.319454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166edd58 00:38:36.653 [2024-10-13 14:34:40.320390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.320406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.327911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ddc00 00:38:36.653 [2024-10-13 14:34:40.328852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.328868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.336378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166ff3c8 00:38:36.653 [2024-10-13 14:34:40.337276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.337292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.344839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166fc128 00:38:36.653 [2024-10-13 14:34:40.345764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.653 [2024-10-13 14:34:40.345780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:36.653 [2024-10-13 14:34:40.353309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66200) with pdu=0x2000166f1ca0 00:38:36.653 [2024-10-13 14:34:40.354493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:36.653 [2024-10-13 14:34:40.354510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:38:36.914 29950.50 IOPS, 116.99 MiB/s
00:38:36.914 Latency(us)
00:38:36.914 [2024-10-13T12:34:40.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:36.915 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:38:36.915 nvme0n1 : 2.01 29956.57 117.02 0.00 0.00 4266.85 2148.59 12043.03
00:38:36.915 [2024-10-13T12:34:40.622Z] ===================================================================================================================
00:38:36.915 [2024-10-13T12:34:40.622Z] Total : 29956.57 117.02 0.00 0.00 4266.85 2148.59 12043.03
00:38:36.915 {
00:38:36.915   "results": [
00:38:36.915     {
00:38:36.915       "job": "nvme0n1",
00:38:36.915       "core_mask": "0x2",
00:38:36.915       "workload": "randwrite",
00:38:36.915       "status": "finished",
00:38:36.915       "queue_depth": 128,
00:38:36.915       "io_size": 4096,
00:38:36.915       "runtime": 2.005937,
00:38:36.915       "iops": 29956.573910347135,
00:38:36.915       "mibps": 117.0178668372935,
00:38:36.915       "io_failed": 0,
00:38:36.915       "io_timeout": 0,
00:38:36.915       "avg_latency_us": 4266.853609924492,
00:38:36.915       "min_latency_us": 2148.5867023053793,
00:38:36.915       "max_latency_us": 12043.033745405946
00:38:36.915     }
00:38:36.915   ],
00:38:36.915   "core_count": 1
00:38:36.915 }
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:38:36.915 | .driver_specific
00:38:36.915 | .nvme_error
00:38:36.915 | .status_code
00:38:36.915 | .command_transient_transport_error'
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 ))
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1976666
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1976666 ']'
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1976666
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:36.915 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1976666
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process
with pid 1976666'
00:38:37.175 killing process with pid 1976666
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1976666
00:38:37.175 Received shutdown signal, test time was about 2.000000 seconds
00:38:37.175
00:38:37.175 Latency(us)
00:38:37.175 [2024-10-13T12:34:40.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:37.175 [2024-10-13T12:34:40.882Z] ===================================================================================================================
00:38:37.175 [2024-10-13T12:34:40.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1976666
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1977391
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1977391 /var/tmp/bperf.sock
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1977391 ']'
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:38:37.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:37.175 14:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:37.176 [2024-10-13 14:34:40.787478] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:38:37.176 [2024-10-13 14:34:40.787532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977391 ]
00:38:37.176 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:37.176 Zero copy mechanism will not be used.
00:38:37.436 [2024-10-13 14:34:40.917910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
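The trace above is the standard autotest launch pattern: digest.sh forks bdevperf with a private RPC socket (-r /var/tmp/bperf.sock), records bperfpid, and waitforlisten polls until that socket accepts RPCs before the test proceeds; killprocess tears the process down afterwards. A minimal stand-alone sketch of the same pattern, assuming the checkout path used by this job and substituting an rpc_get_methods probe for the real waitforlisten helper:

  # Sketch only: launch bdevperf on a private RPC socket and wait for it to listen.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this job
  BPERF_SOCK=/var/tmp/bperf.sock

  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # waitforlisten stand-in: poll until the UNIX-domain socket answers an RPC
  # (the real helper also re-checks that the pid is still alive; omitted here).
  for ((i = 0; i < 100; i++)); do
      "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done

  # killprocess stand-in for teardown
  trap 'kill -9 "$bperfpid" 2>/dev/null || true' EXIT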
00:38:37.436 [2024-10-13 14:34:40.967304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:37.436 [2024-10-13 14:34:40.981697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:38.008 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:38:38.008 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:38:38.008 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:38.008 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:38:38.268 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:38:38.268 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:38.268 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:38.268 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:38.268 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:38.268 14:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:38:38.529 nvme0n1
00:38:38.529 14:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:38:38.529 14:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:38.529 14:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:38:38.529 14:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:38.529 14:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:38:38.529 14:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:38:38.791 I/O size of 131072 is greater than zero copy threshold (65536).
00:38:38.791 Zero copy mechanism will not be used.
00:38:38.791 Running I/O for 2 seconds...
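Once bdevperf is listening, the test body traced above reduces to a handful of RPCs: enable per-controller NVMe error statistics with --bdev-retry-count -1 (retry instead of failing the I/O), attach the TCP controller with data digest enabled (--ddgst), arm the accel crc32c error injector (-t corrupt -i 32, flags as traced), drive the timed workload, and finally assert that bdev_get_iostat reports a non-zero command_transient_transport_error count (235 in the first pass above). A hedged sketch of that sequence, reusing SPDK_DIR and BPERF_SOCK from the previous sketch:

  # Sketch only: the digest-error sequence as traced in this log.
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32   # injector flags exactly as traced

  # run the pre-configured bdevperf job, then read back the error counters
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  errcount=$(rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # the test passes only if injected digest errors surfaced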
00:38:38.791 [2024-10-13 14:34:42.260901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.261156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.261184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.272455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.272699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.272718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.283984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.284258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.284276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.292746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.293089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.293107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.298002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.298186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.298203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.306593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.306887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.306903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.314791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.314857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.314873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.322352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.322411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.322426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.328112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.328158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.328174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.335481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.791 [2024-10-13 14:34:42.335710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.791 [2024-10-13 14:34:42.335725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.791 [2024-10-13 14:34:42.341128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.341190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.341205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.349289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.349341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.349357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.357769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.358068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.358089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.366596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.366818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.366832] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.372677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.372727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.372742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.378997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.379053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.379073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.384356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.384415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.384430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.389832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.389893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.389908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.396074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.396124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.396139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.405237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.405298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.405313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.412390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.412613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.412628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.418891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.419131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.427753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.427813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.427828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.434910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.434968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.434983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.439194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.439253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.439268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.448144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.448402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.448417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.454180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.454427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.454442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.462084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.462355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:38.792 [2024-10-13 14:34:42.462371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.466444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.466503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.466518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.472177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.472231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.472246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.480776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.480842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.480857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.489026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.489107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.489123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:38.792 [2024-10-13 14:34:42.493818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:38.792 [2024-10-13 14:34:42.493864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:38.792 [2024-10-13 14:34:42.493880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.499034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.499091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.499105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.506494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.506550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.506565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.514892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.515053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.515073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.523809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.523878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.523893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.534117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.534165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.534180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.541440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.541515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.541533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.546929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.547225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.547240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.554253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.554297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.554312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.561838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.561888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.561903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.568431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.568487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.568502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.576061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.576124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.576140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.583645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.583704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.583719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.592641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.592684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.592699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.603056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.603171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.603186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.611059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.611133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.611149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.621096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.621152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.621167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.631626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.631880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.631895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.642130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.642213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.642229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.054 [2024-10-13 14:34:42.653865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.054 [2024-10-13 14:34:42.654075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.054 [2024-10-13 14:34:42.654090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.664619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.664890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.664905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.676003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.676327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.676344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.687153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.687392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.687408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.699023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 
[2024-10-13 14:34:42.699338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.699355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.710845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.711121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.711136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.721708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.721803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.721818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.732797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.733020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.733035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.743994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.744260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.744275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.055 [2024-10-13 14:34:42.755277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.055 [2024-10-13 14:34:42.755566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.055 [2024-10-13 14:34:42.755582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.766431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.766539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.766554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.777734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.777813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.777828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.788000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.788303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.788319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.798074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.798359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.798378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.808266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.808513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.808529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.817614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.817765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.817780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.826975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.827130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.827146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.836558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.836873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.836889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.316 [2024-10-13 14:34:42.841116] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.316 [2024-10-13 14:34:42.841180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.316 [2024-10-13 14:34:42.841195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.846837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.846907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.846922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.852216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.852283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.852298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.861027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.861291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.861307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.868985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.869038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.869054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.876274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.876345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.876361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.885652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.885705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.885720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:38:39.317 [2024-10-13 14:34:42.893707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.893755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.893770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.901416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.901475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.901491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.910818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.910874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.910890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.918052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.918118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.918133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.923751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.923802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.923817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.934620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.934797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.934816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.941604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.941662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.941677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.948856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.949111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.949126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.959120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.959365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.959380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.969910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.981275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.981349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.981365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:42.992792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:42.993093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:42.993109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:43.004236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:43.004456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:43.004471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.317 [2024-10-13 14:34:43.015349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.317 [2024-10-13 14:34:43.015587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.317 [2024-10-13 14:34:43.015602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.025886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.026184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.026200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.035758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.035989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.036004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.045812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.046069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.046085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.056208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.056320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.056335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.066425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.066625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.066640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.076899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.077125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.077141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.087641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.087882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.087905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.098003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.098246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.098263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.108020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.108323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.108339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.118377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.118664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.118680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.127477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.127793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.127809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.137246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.137512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.137536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.146895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.147157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.147173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.156393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.156602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 
[2024-10-13 14:34:43.156616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.167398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.167671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.167687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.177205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.177474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.177491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.187407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.187469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.187484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.193857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.193918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.193936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.199798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.199847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.199862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.203765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.204070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.204086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.212442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.212531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.212546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.579 [2024-10-13 14:34:43.218372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.579 [2024-10-13 14:34:43.218640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.579 [2024-10-13 14:34:43.218656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.223245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.223315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.223331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.229548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.229608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.229623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.237963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.238256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.238272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.580 3570.00 IOPS, 446.25 MiB/s [2024-10-13T12:34:43.287Z] [2024-10-13 14:34:43.245374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.245470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.245485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.250483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.250759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.250775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.258751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.258868] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.258883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.262218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.262299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.262314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.265754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.265857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.265872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.269316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.269434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.269449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.272764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.272856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.272871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.276218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.276310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.276326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.279651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.580 [2024-10-13 14:34:43.279736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.279751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.580 [2024-10-13 14:34:43.283233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 
00:38:39.580 [2024-10-13 14:34:43.283329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.580 [2024-10-13 14:34:43.283344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.842 [2024-10-13 14:34:43.286701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.842 [2024-10-13 14:34:43.286759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.842 [2024-10-13 14:34:43.286774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.842 [2024-10-13 14:34:43.290201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.842 [2024-10-13 14:34:43.290315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.842 [2024-10-13 14:34:43.290330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.842 [2024-10-13 14:34:43.293672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.842 [2024-10-13 14:34:43.293768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.842 [2024-10-13 14:34:43.293783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.842 [2024-10-13 14:34:43.297041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.842 [2024-10-13 14:34:43.297123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.842 [2024-10-13 14:34:43.297138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.842 [2024-10-13 14:34:43.299810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.299861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.299877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.302417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.302489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.302504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.307114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.307192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.307207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.310819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.310975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.310990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.314873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.314935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.314950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.318488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.318557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.318572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.324562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.324639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.324654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.327106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.327164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.327179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.329638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.329707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.329724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.332179] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.332232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.332247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.334740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.334805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.334820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.337254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.337297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.337312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.339729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.339773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.339788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.342242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.342303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.342318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.344774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.344826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.344841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.349005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.349100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.349115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:38:39.843 [2024-10-13 14:34:43.353947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.354012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.354027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.356414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.356458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.356473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.358949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.358992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.359007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.361896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.361947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.361962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.365737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.365786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.365801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.368389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.368444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.368463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.371001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.371072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.371087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.374087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.374168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.374183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.378866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.378937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.378952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.381959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.382038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.382053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.385004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.385068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.385083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.387523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.387591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.387606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.843 [2024-10-13 14:34:43.389983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.843 [2024-10-13 14:34:43.390047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.843 [2024-10-13 14:34:43.390069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.392470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.392526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.392541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.394934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.395000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.395015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.397427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.397489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.397504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.399888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.399940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.399955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.402333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.402402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.402417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.405620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.405718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.405733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.414458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.414658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.414672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.424667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.424905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.424921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.433796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.433960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.433975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.444455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.444706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.444722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.454518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.454805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.454821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.465162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.465372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.465388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.475664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.475938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.475955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.485430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.485647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.485662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.495889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.495986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 
[2024-10-13 14:34:43.496001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.505812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.506000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.506014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.516314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.516591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.516607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.526344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.526584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.526599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.534145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.534211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.534229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.537530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.537574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.537589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.540782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.540827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.540843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:39.844 [2024-10-13 14:34:43.543693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:39.844 [2024-10-13 14:34:43.543902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:39.844 [2024-10-13 14:34:43.543917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.106 [2024-10-13 14:34:43.547509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.106 [2024-10-13 14:34:43.547568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.106 [2024-10-13 14:34:43.547583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.106 [2024-10-13 14:34:43.550609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.106 [2024-10-13 14:34:43.550654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.106 [2024-10-13 14:34:43.550670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.106 [2024-10-13 14:34:43.553703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.106 [2024-10-13 14:34:43.553771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.106 [2024-10-13 14:34:43.553787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.106 [2024-10-13 14:34:43.561197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.561412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.561427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.569634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.569936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.569952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.577989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.578069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.578084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.586874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.586945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.586960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.593977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.594269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.594285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.598163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.598212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.598227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.602594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.602653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.602668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.608947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.608992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.609008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.611706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.611763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.611778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.614404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.614474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.614489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.617927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.618025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.618040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.620689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.620745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.620760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.623380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.623446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.626108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.626168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.626184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.628891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.628937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.628952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.631582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.631626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.631641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.634105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.634158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.634172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.636771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 
[2024-10-13 14:34:43.636826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.636841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.639254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.639303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.639318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.641734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.641778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.641799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.644235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.644280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.644295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.646752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.646806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.646821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.649409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.649476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.649491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.654539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.654595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.654610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.661598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.661727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.661743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.670107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.670354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.670369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.107 [2024-10-13 14:34:43.677381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.107 [2024-10-13 14:34:43.677676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.107 [2024-10-13 14:34:43.677693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.683615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.683676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.683692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.690102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.690161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.690176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.693089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.693155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.693170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.699428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.699697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.699712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.707249] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.707422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.707438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.710781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.710827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.710842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.717590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.717641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.717657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.722485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.722734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.722750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.725705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.725750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.725766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.729689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.729961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.729980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.734577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.734648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.734664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:38:40.108 [2024-10-13 14:34:43.742452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.742503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.742518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.752370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.752419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.752434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.762096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.762406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.762422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.770552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.770868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.770884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.780442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.780717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.780732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.791191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.791510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.791526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.801868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.802083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.802098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.108 [2024-10-13 14:34:43.809266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.108 [2024-10-13 14:34:43.809338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.108 [2024-10-13 14:34:43.809353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.812018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.812084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.812099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.814733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.814775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.814790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.817553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.817606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.817621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.820299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.820342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.820358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.822930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.823003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.823018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.825593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.825660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.825675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.828213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.828266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.828281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.830652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.830706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.830721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.833097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.833150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.833165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.835565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.835611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.835626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.838010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.838057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.838078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.840466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.840510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.840525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.842882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.842925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.842941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.845346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.845395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.845410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.847772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.847816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.847831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.850189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.850247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.850262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.852604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.852650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.852668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.855019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.855075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.855090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.857444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.857488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 [2024-10-13 14:34:43.857504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.862568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.372 [2024-10-13 14:34:43.862846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.372 
[2024-10-13 14:34:43.862862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.372 [2024-10-13 14:34:43.868001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.868061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.868081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.870900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.870978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.870992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.874047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.874123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.874139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.882420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.882465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.882480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.891870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.892038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.892053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.903336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.903513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.903528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.913915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.914031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.914046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.924457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.924731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.924747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.934698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.934950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.934965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.945303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.945603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.945619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.956122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.956369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.956384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.966685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.966927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.966943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.976959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.977245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.977261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.987684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.987873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.987888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:43.997972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:43.998250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:43.998267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.004441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.004509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.004524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.011076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.011352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.011367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.018664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.018725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.018741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.023390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.023446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.023461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.026828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.026900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.026915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.030433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.030477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.030493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.033127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.373 [2024-10-13 14:34:44.033190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.373 [2024-10-13 14:34:44.033205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.373 [2024-10-13 14:34:44.035796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.035839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.035858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.038716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.038782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.038798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.041324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.041389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.041404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.045396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.045488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.045503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.050104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.050173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.050188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.052605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 
[2024-10-13 14:34:44.052656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.052672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.055246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.055308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.055323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.061540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.061804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.061820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.067632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.067685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.067700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.374 [2024-10-13 14:34:44.070546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.374 [2024-10-13 14:34:44.070592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.374 [2024-10-13 14:34:44.070607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.077059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.077273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.077288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.084948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.085068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.085083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.092701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) 
with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.092903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.092919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.100369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.100446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.100462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.107850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.107899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.107914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.110996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.111047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.111068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.114761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.636 [2024-10-13 14:34:44.114845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.636 [2024-10-13 14:34:44.114860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.636 [2024-10-13 14:34:44.117731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.117811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.117827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.120297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.120361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.120377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.122818] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.122884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.122899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.125390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.125456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.125471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.128329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.128420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.128435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.131523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.131595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.131610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.134051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.134153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.134168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.136533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.136587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.136602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.139095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.139146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.139161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.141794] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.141863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.141878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.144608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.144653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.144668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.152214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.152532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.152548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.160045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.160151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.160167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.168975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.169029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.169044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.174188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.174408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.174423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.179496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.179555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.179570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
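
Each injected failure above produces the same three-record pattern: a data_crc32_calc_done *ERROR* from tcp.c, the WRITE command it hit, and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A quick way to tally them from a saved copy of this console output; the filename is a placeholder, and the count should line up with the transient-error check at the end of the test:

    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log
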
00:38:40.637 [2024-10-13 14:34:44.182206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.182253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.182268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.184818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.184864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.184879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.187425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.187469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.187483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.190003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.190055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.190075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.192630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.192674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.192689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.195230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.195300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.195315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.197889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.197943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.197958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.200407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.200465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.200480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.202850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.202904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.202919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.205294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.205344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.205360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.207765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.207814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.207830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.210225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.210274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.637 [2024-10-13 14:34:44.210289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.637 [2024-10-13 14:34:44.212651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.637 [2024-10-13 14:34:44.212705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.212719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.638 [2024-10-13 14:34:44.215086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.215149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.215164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.638 [2024-10-13 14:34:44.217515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.217560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.217576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.638 [2024-10-13 14:34:44.219938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.219989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.220004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.638 [2024-10-13 14:34:44.222376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.222426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.222440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:40.638 [2024-10-13 14:34:44.225356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.225426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.225441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:40.638 [2024-10-13 14:34:44.231801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.232082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.232098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.638 5019.00 IOPS, 627.38 MiB/s [2024-10-13T12:34:44.345Z] [2024-10-13 14:34:44.242342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f66540) with pdu=0x2000166fef90 00:38:40.638 [2024-10-13 14:34:44.242629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.638 [2024-10-13 14:34:44.242645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.638 00:38:40.638 Latency(us) 00:38:40.638 [2024-10-13T12:34:44.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.638 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:38:40.638 nvme0n1 : 2.01 5012.58 626.57 0.00 0.00 3186.13 1129.03 15327.50 00:38:40.638 
[2024-10-13T12:34:44.345Z] =================================================================================================================== 00:38:40.638 [2024-10-13T12:34:44.345Z] Total : 5012.58 626.57 0.00 0.00 3186.13 1129.03 15327.50 00:38:40.638 { 00:38:40.638 "results": [ 00:38:40.638 { 00:38:40.638 "job": "nvme0n1", 00:38:40.638 "core_mask": "0x2", 00:38:40.638 "workload": "randwrite", 00:38:40.638 "status": "finished", 00:38:40.638 "queue_depth": 16, 00:38:40.638 "io_size": 131072, 00:38:40.638 "runtime": 2.006353, 00:38:40.638 "iops": 5012.577547420618, 00:38:40.638 "mibps": 626.5721934275773, 00:38:40.638 "io_failed": 0, 00:38:40.638 "io_timeout": 0, 00:38:40.638 "avg_latency_us": 3186.1297334229307, 00:38:40.638 "min_latency_us": 1129.0344136318076, 00:38:40.638 "max_latency_us": 15327.497494153024 00:38:40.638 } 00:38:40.638 ], 00:38:40.638 "core_count": 1 00:38:40.638 } 00:38:40.638 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:38:40.638 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:38:40.638 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:38:40.638 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:38:40.638 | .driver_specific 00:38:40.638 | .nvme_error 00:38:40.638 | .status_code 00:38:40.638 | .command_transient_transport_error' 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 324 > 0 )) 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1977391 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1977391 ']' 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1977391 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1977391 00:38:40.899 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:40.900 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:40.900 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1977391' 00:38:40.900 killing process with pid 1977391 00:38:40.900 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1977391 00:38:40.900 Received shutdown signal, test time was about 2.000000 seconds 00:38:40.900 00:38:40.900 Latency(us) 00:38:40.900 [2024-10-13T12:34:44.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.900 [2024-10-13T12:34:44.607Z] =================================================================================================================== 00:38:40.900 [2024-10-13T12:34:44.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:40.900 
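The get_transient_errcount check traced just above reduces to one RPC call and one jq lookup against the bdevperf stats socket. A minimal sketch, with the socket path and jq filter taken from the trace and the errcount variable name being illustrative:

# Ask the bdevperf app (RPC socket /var/tmp/bperf.sock) for per-bdev I/O stats
# and pull out the count of COMMAND TRANSIENT TRANSPORT ERROR completions.
errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# digest.sh@71 passes the data-digest error case only if corruption was in
# fact detected and surfaced as transient transport errors (324 here):
(( errcount > 0 )) && echo "transient transport errors seen: $errcount"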
14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1977391 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1974994 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1974994 ']' 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1974994 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1974994 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1974994' 00:38:41.160 killing process with pid 1974994 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1974994 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1974994 00:38:41.160 00:38:41.160 real 0m16.547s 00:38:41.160 user 0m32.274s 00:38:41.160 sys 0m3.640s 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:38:41.160 ************************************ 00:38:41.160 END TEST nvmf_digest_error 00:38:41.160 ************************************ 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.160 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.160 rmmod nvme_tcp 00:38:41.161 rmmod nvme_fabrics 00:38:41.161 rmmod nvme_keyring 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1974994 ']' 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1974994 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1974994 ']' 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@954 -- # kill -0 1974994 00:38:41.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1974994) - No such process 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1974994 is not found' 00:38:41.421 Process with pid 1974994 is not found 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:41.421 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:41.422 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.422 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.422 14:34:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.335 14:34:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:43.335 00:38:43.335 real 0m43.107s 00:38:43.335 user 1m6.600s 00:38:43.335 sys 0m12.991s 00:38:43.335 14:34:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:43.335 14:34:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:43.335 ************************************ 00:38:43.335 END TEST nvmf_digest 00:38:43.335 ************************************ 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:43.335 14:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.597 ************************************ 00:38:43.597 START TEST nvmf_bdevperf 00:38:43.597 ************************************ 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:38:43.597 * Looking for test storage... 
00:38:43.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:43.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.597 --rc genhtml_branch_coverage=1 00:38:43.597 --rc genhtml_function_coverage=1 00:38:43.597 --rc genhtml_legend=1 00:38:43.597 --rc geninfo_all_blocks=1 00:38:43.597 --rc geninfo_unexecuted_blocks=1 00:38:43.597 00:38:43.597 ' 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:43.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.597 --rc genhtml_branch_coverage=1 00:38:43.597 --rc genhtml_function_coverage=1 00:38:43.597 --rc genhtml_legend=1 00:38:43.597 --rc geninfo_all_blocks=1 00:38:43.597 --rc geninfo_unexecuted_blocks=1 00:38:43.597 00:38:43.597 ' 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:43.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.597 --rc genhtml_branch_coverage=1 00:38:43.597 --rc genhtml_function_coverage=1 00:38:43.597 --rc genhtml_legend=1 00:38:43.597 --rc geninfo_all_blocks=1 00:38:43.597 --rc geninfo_unexecuted_blocks=1 00:38:43.597 00:38:43.597 ' 00:38:43.597 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:43.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.597 --rc genhtml_branch_coverage=1 00:38:43.597 --rc genhtml_function_coverage=1 00:38:43.597 --rc genhtml_legend=1 00:38:43.598 --rc geninfo_all_blocks=1 00:38:43.598 --rc geninfo_unexecuted_blocks=1 00:38:43.598 00:38:43.598 ' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
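For reference, the lt 1.15 2 gate traced above (cmp_versions in scripts/common.sh) amounts to splitting both version strings on '.', '-' and ':' and comparing component by component. A simplified sketch that assumes purely numeric components and folds the gt/eq variants away:

# Return success if version $1 sorts strictly below version $2.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    # Missing components compare as 0, so "2" behaves like "2.0".
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}
# As in the trace: pick lcov's version off the last field and test it against 2.
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"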
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:43.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:38:43.598 14:34:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:51.744 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:51.744 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
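The per-device loop that follows resolves each matched PCI function (the Found 0000:31:00.x lines above) to the net device the kernel bound to it, via the device's sysfs node. A condensed sketch of this gather step; reading operstate for the up == up test is an assumption of the sketch:

# For every supported NIC, list the netdev(s) living under its PCI node,
# e.g. /sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0.
for pci in "${pci_devs[@]}"; do
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue            # no netdev bound to this function
    net_dev=${path##*/}                   # e.g. cvl_0_0
    [[ $(cat "$path/operstate" 2>/dev/null) == up ]] &&
      echo "Found net devices under $pci: $net_dev"
  done
done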
00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:51.744 Found net devices under 0000:31:00.0: cvl_0_0 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:51.744 Found net devices under 0000:31:00.1: cvl_0_1 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:51.744 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:51.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:51.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:38:51.744 00:38:51.744 --- 10.0.0.2 ping statistics --- 00:38:51.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.745 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:51.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:51.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:38:51.745 00:38:51.745 --- 10.0.0.1 ping statistics --- 00:38:51.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.745 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1982341 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1982341 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1982341 ']' 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:51.745 14:34:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:51.745 [2024-10-13 14:34:54.991988] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:51.745 [2024-10-13 14:34:54.992037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.745 [2024-10-13 14:34:55.128572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:38:51.745 [2024-10-13 14:34:55.176902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:51.745 [2024-10-13 14:34:55.196760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.745 [2024-10-13 14:34:55.196795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.745 [2024-10-13 14:34:55.196803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.745 [2024-10-13 14:34:55.196809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.745 [2024-10-13 14:34:55.196815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:51.745 [2024-10-13 14:34:55.198296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:51.745 [2024-10-13 14:34:55.198447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.745 [2024-10-13 14:34:55.198449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.318 [2024-10-13 14:34:55.858425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.318 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.319 Malloc0 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:52.319 [2024-10-13 14:34:55.930482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:52.319 { 00:38:52.319 "params": { 00:38:52.319 "name": "Nvme$subsystem", 00:38:52.319 "trtype": "$TEST_TRANSPORT", 00:38:52.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.319 "adrfam": "ipv4", 00:38:52.319 "trsvcid": "$NVMF_PORT", 00:38:52.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.319 "hdgst": ${hdgst:-false}, 00:38:52.319 "ddgst": ${ddgst:-false} 00:38:52.319 }, 00:38:52.319 "method": "bdev_nvme_attach_controller" 00:38:52.319 } 00:38:52.319 EOF 00:38:52.319 )") 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:38:52.319 14:34:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:52.319 "params": { 00:38:52.319 "name": "Nvme1", 00:38:52.319 "trtype": "tcp", 00:38:52.319 "traddr": "10.0.0.2", 00:38:52.319 "adrfam": "ipv4", 00:38:52.319 "trsvcid": "4420", 00:38:52.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:52.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:52.319 "hdgst": false, 00:38:52.319 "ddgst": false 00:38:52.319 }, 00:38:52.319 "method": "bdev_nvme_attach_controller" 00:38:52.319 }' 00:38:52.319 [2024-10-13 14:34:55.988998] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:38:52.319 [2024-10-13 14:34:55.989080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1982512 ] 00:38:52.581 [2024-10-13 14:34:56.124045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
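Spelled out, the config that gen_nvmf_target_json feeds to bdevperf over /dev/fd/62 is the single bdev_nvme_attach_controller stanza printed above. A standalone sketch wrapping it in SPDK's usual subsystems envelope, with bperf.json as an illustrative file name standing in for the fd redirection:

cat > bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# First run above: 128 outstanding 4 KiB verify I/Os for 1 second.
build/examples/bdevperf --json bperf.json -q 128 -o 4096 -w verify -t 1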
00:38:52.581 [2024-10-13 14:34:56.174401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.581 [2024-10-13 14:34:56.202779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.842 Running I/O for 1 seconds... 00:38:54.235 8997.00 IOPS, 35.14 MiB/s 00:38:54.235 Latency(us) 00:38:54.235 [2024-10-13T12:34:57.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.235 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:54.235 Verification LBA range: start 0x0 length 0x4000 00:38:54.235 Nvme1n1 : 1.01 9043.44 35.33 0.00 0.00 14076.62 3079.18 11769.33 00:38:54.235 [2024-10-13T12:34:57.942Z] =================================================================================================================== 00:38:54.235 [2024-10-13T12:34:57.942Z] Total : 9043.44 35.33 0.00 0.00 14076.62 3079.18 11769.33 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1982848 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:54.235 { 00:38:54.235 "params": { 00:38:54.235 "name": "Nvme$subsystem", 00:38:54.235 "trtype": "$TEST_TRANSPORT", 00:38:54.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.235 "adrfam": "ipv4", 00:38:54.235 "trsvcid": "$NVMF_PORT", 00:38:54.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.235 "hdgst": ${hdgst:-false}, 00:38:54.235 "ddgst": ${ddgst:-false} 00:38:54.235 }, 00:38:54.235 "method": "bdev_nvme_attach_controller" 00:38:54.235 } 00:38:54.235 EOF 00:38:54.235 )") 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:38:54.235 14:34:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:54.235 "params": { 00:38:54.235 "name": "Nvme1", 00:38:54.235 "trtype": "tcp", 00:38:54.235 "traddr": "10.0.0.2", 00:38:54.235 "adrfam": "ipv4", 00:38:54.235 "trsvcid": "4420", 00:38:54.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:54.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:54.235 "hdgst": false, 00:38:54.235 "ddgst": false 00:38:54.235 }, 00:38:54.235 "method": "bdev_nvme_attach_controller" 00:38:54.235 }' 00:38:54.235 [2024-10-13 14:34:57.693140] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:38:54.235 [2024-10-13 14:34:57.693202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1982848 ] 00:38:54.235 [2024-10-13 14:34:57.825349] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:54.235 [2024-10-13 14:34:57.876918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.235 [2024-10-13 14:34:57.893787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:54.572 Running I/O for 15 seconds... 00:38:56.492 10125.00 IOPS, 39.55 MiB/s [2024-10-13T12:35:00.772Z] 10538.00 IOPS, 41.16 MiB/s [2024-10-13T12:35:00.772Z] 14:35:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1982341
00:38:57.065 14:35:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:38:57.065 [2024-10-13 14:35:00.658263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.065 [2024-10-13 14:35:00.658304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for lba 92664 through 93032, eight blocks apart, across cids 12, 35, 111, 80, 73, 31, 46 and the rest of the queue ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.066 [2024-10-13 14:35:00.659501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:38:57.066 [2024-10-13 14:35:00.659510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 
14:35:00.659677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.659983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.659993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.660001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.660018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.067 [2024-10-13 14:35:00.660034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.067 [2024-10-13 14:35:00.660207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.067 [2024-10-13 14:35:00.660217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 
[2024-10-13 14:35:00.660359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:57.068 [2024-10-13 14:35:00.660442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:57.068 [2024-10-13 14:35:00.660494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18419d0 is same with the state(6) to be set 00:38:57.068 [2024-10-13 14:35:00.660511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:57.068 [2024-10-13 14:35:00.660517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:57.068 [2024-10-13 14:35:00.660524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92648 len:8 PRP1 0x0 PRP2 0x0 
00:38:57.068 [2024-10-13 14:35:00.660533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:57.068 [2024-10-13 14:35:00.660568] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18419d0 was disconnected and freed. reset controller. 00:38:57.068 [2024-10-13 14:35:00.664078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.068 [2024-10-13 14:35:00.664127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.068 [2024-10-13 14:35:00.664886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.068 [2024-10-13 14:35:00.664903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.068 [2024-10-13 14:35:00.664911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.068 [2024-10-13 14:35:00.665136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.068 [2024-10-13 14:35:00.665357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.068 [2024-10-13 14:35:00.665365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.068 [2024-10-13 14:35:00.665374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.068 [2024-10-13 14:35:00.668922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.068 [2024-10-13 14:35:00.678134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.068 [2024-10-13 14:35:00.678749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.068 [2024-10-13 14:35:00.678788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.068 [2024-10-13 14:35:00.678799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.068 [2024-10-13 14:35:00.679040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.068 [2024-10-13 14:35:00.679274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.068 [2024-10-13 14:35:00.679284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.068 [2024-10-13 14:35:00.679292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.068 [2024-10-13 14:35:00.682846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
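What the flood above shows: once the TCP connection behind qpair 0x18419d0 drops, SPDK drains the queue pair by completing every still-queued request with the generic ABORTED - SQ DELETION status (printed as 00/08, i.e. status code type 0, status code 0x08), one command/completion pair per request, then frees the qpair and schedules a controller reset. A minimal sketch for tallying such a flood offline, assuming the log was saved to a file (the name autotest.log is hypothetical) and relying only on the nvme_qpair.c print format visible above:

```python
#!/usr/bin/env python3
"""Summarize the aborted-I/O flood from a saved SPDK autotest log.

A minimal sketch, not part of the test suite: it only matches the
nvme_io_qpair_print_command format shown in this log excerpt, where
every printed command is paired with an ABORTED - SQ DELETION completion.
"""
import re
import sys
from collections import Counter

# Matches e.g. "*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92792 len:8 ..."
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def tally(path: str) -> None:
    ops = Counter()   # READ/WRITE counts
    lbas = []         # LBAs of the drained commands
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = CMD_RE.search(line)
            if m:
                ops[m.group(1)] += 1
                lbas.append(int(m.group(5)))
    if not lbas:
        print("no queued-command prints found")
        return
    print(f"aborted commands: {dict(ops)}")
    print(f"lba span: {min(lbas)}..{max(lbas)}")

if __name__ == "__main__":
    tally(sys.argv[1] if len(sys.argv) > 1 else "autotest.log")
```

Run as `python3 tally_aborts.py autotest.log`; on an excerpt like this it reports the READ/WRITE mix and the LBA span of the I/O that was drained when the qpair died.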
[... 17 further identical reset attempts elided, 14:35:00.692049 through 14:35:00.914204, one roughly every 14 ms: each connect() to 10.0.0.2:4420 is refused (errno = 111) and each attempt ends with 'Resetting controller failed.' ...]
00:38:57.332 [2024-10-13 14:35:00.928187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:57.332 [2024-10-13 14:35:00.928932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:57.332 [2024-10-13 14:35:00.928997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:57.332 [2024-10-13 14:35:00.929011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:57.332 [2024-10-13 14:35:00.929281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:57.332 [2024-10-13 14:35:00.929509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:57.332 [2024-10-13 14:35:00.929519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:57.332 [2024-10-13 14:35:00.929527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:57.332 [2024-10-13 14:35:00.933107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:57.332 [2024-10-13 14:35:00.942159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:00.942727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:00.942756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:00.942767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.332 [2024-10-13 14:35:00.942991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.332 [2024-10-13 14:35:00.943225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.332 [2024-10-13 14:35:00.943235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.332 [2024-10-13 14:35:00.943243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.332 [2024-10-13 14:35:00.946823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.332 [2024-10-13 14:35:00.956054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:00.956631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:00.956695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:00.956707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.332 [2024-10-13 14:35:00.956962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.332 [2024-10-13 14:35:00.957200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.332 [2024-10-13 14:35:00.957210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.332 [2024-10-13 14:35:00.957219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.332 [2024-10-13 14:35:00.960793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
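Every retry in this run targets the same transport ID -- subsystem NQN nqn.2016-06.io.spdk:cnode1 at 10.0.0.2, port 4420 (the conventional NVMe-oF port) -- and logs the same qpair address (tqpair=0x182edc0), so this appears to be a single controller's connection being cycled repeatedly rather than many controllers failing independently.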
00:38:57.332 [2024-10-13 14:35:00.970015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:00.970721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:00.970785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:00.970798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.332 [2024-10-13 14:35:00.971053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.332 [2024-10-13 14:35:00.971298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.332 [2024-10-13 14:35:00.971308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.332 [2024-10-13 14:35:00.971317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.332 [2024-10-13 14:35:00.974898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.332 [2024-10-13 14:35:00.983929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:00.984601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:00.984664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:00.984677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.332 [2024-10-13 14:35:00.984933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.332 [2024-10-13 14:35:00.985177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.332 [2024-10-13 14:35:00.985187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.332 [2024-10-13 14:35:00.985195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.332 [2024-10-13 14:35:00.988771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.332 [2024-10-13 14:35:00.997791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:00.998479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:00.998544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:00.998557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.332 [2024-10-13 14:35:00.998812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.332 [2024-10-13 14:35:00.999039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.332 [2024-10-13 14:35:00.999048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.332 [2024-10-13 14:35:00.999056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.332 [2024-10-13 14:35:01.002649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.332 [2024-10-13 14:35:01.011668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:01.012293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:01.012322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:01.012331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.332 [2024-10-13 14:35:01.012555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.332 [2024-10-13 14:35:01.012778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.332 [2024-10-13 14:35:01.012789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.332 [2024-10-13 14:35:01.012796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.332 [2024-10-13 14:35:01.016382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.332 [2024-10-13 14:35:01.025647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.332 [2024-10-13 14:35:01.026314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.332 [2024-10-13 14:35:01.026377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.332 [2024-10-13 14:35:01.026390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.333 [2024-10-13 14:35:01.026645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.333 [2024-10-13 14:35:01.026872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.333 [2024-10-13 14:35:01.026881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.333 [2024-10-13 14:35:01.026889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.333 [2024-10-13 14:35:01.030472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.594 [2024-10-13 14:35:01.039509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.594 [2024-10-13 14:35:01.040174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.594 [2024-10-13 14:35:01.040222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.594 [2024-10-13 14:35:01.040232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.594 [2024-10-13 14:35:01.040475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.594 [2024-10-13 14:35:01.040700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.594 [2024-10-13 14:35:01.040710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.594 [2024-10-13 14:35:01.040718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.594 9313.00 IOPS, 36.38 MiB/s [2024-10-13T12:35:01.301Z] [2024-10-13 14:35:01.045948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
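The "9313.00 IOPS, 36.38 MiB/s" entry interleaved above is the periodic throughput report from the bdevperf-style workload that keeps I/O outstanding while the controller is reset. The two figures are mutually consistent with a 4 KiB I/O size: 9313 × 4096 B = 38,146,048 B/s ≈ 36.38 MiB/s.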
00:38:57.594 [2024-10-13 14:35:01.053314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.594 [2024-10-13 14:35:01.054021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.594 [2024-10-13 14:35:01.054099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.594 [2024-10-13 14:35:01.054113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.594 [2024-10-13 14:35:01.054369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.594 [2024-10-13 14:35:01.054597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.594 [2024-10-13 14:35:01.054606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.594 [2024-10-13 14:35:01.054615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.594 [2024-10-13 14:35:01.058197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.594 [2024-10-13 14:35:01.067252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.594 [2024-10-13 14:35:01.067854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.594 [2024-10-13 14:35:01.067890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.594 [2024-10-13 14:35:01.067899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.068134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.068357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.068367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.068375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.071945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.595 [2024-10-13 14:35:01.081201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.081771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.081795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.081804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.082028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.082261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.082270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.082279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.085847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.595 [2024-10-13 14:35:01.095095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.095661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.095683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.095691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.095912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.096143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.096152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.096163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.099734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.595 [2024-10-13 14:35:01.108980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.109655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.109719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.109732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.109988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.110238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.110249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.110257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.113832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.595 [2024-10-13 14:35:01.122858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.123465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.123529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.123542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.123797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.124023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.124033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.124041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.127655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.595 [2024-10-13 14:35:01.136686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.137426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.137490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.137503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.137758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.137984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.137994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.138002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.141593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.595 [2024-10-13 14:35:01.150628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.151415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.151479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.151492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.151747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.151973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.151982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.151991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.155594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.595 [2024-10-13 14:35:01.164616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.165218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.165282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.165297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.165554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.165781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.165792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.165800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.169395] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.595 [2024-10-13 14:35:01.178416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.179006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.179034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.179043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.179275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.179498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.179507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.179515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.183079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.595 [2024-10-13 14:35:01.192304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.192873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.192896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.192904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.193134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.193357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.193366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.193373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.196932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.595 [2024-10-13 14:35:01.206144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.206721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.595 [2024-10-13 14:35:01.206738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.595 [2024-10-13 14:35:01.206751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.595 [2024-10-13 14:35:01.206971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.595 [2024-10-13 14:35:01.207198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.595 [2024-10-13 14:35:01.207207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.595 [2024-10-13 14:35:01.207214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.595 [2024-10-13 14:35:01.210762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
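The timestamps show a steady cadence: each "Resetting controller failed" is followed by the next "resetting controller" notice roughly 14 ms later, i.e. the reconnect path keeps polling on a short delay instead of giving up (in SPDK's bdev_nvme this behaviour is governed by knobs such as reconnect_delay_sec and ctrlr_loss_timeout_sec, which this log does not show). A self-contained sketch of that retry-with-delay pattern, using a plain blocking connect and a fixed attempt budget as stand-ins for the poller-driven logic:

/* Illustrative retry-with-delay loop, not SPDK's implementation: keep
 * attempting a TCP connect, treating failure as retryable, until a
 * connection succeeds or the attempt budget is exhausted. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(port),
    };
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        return fd;                        /* connected */
    }
    int saved = errno;                    /* keep the connect() errno */
    close(fd);
    errno = saved;
    return -1;
}

int main(void)
{
    const int max_attempts = 5;           /* stand-in for a ctrlr-loss budget */

    for (int i = 1; i <= max_attempts; i++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("attempt %d: connected\n", i);
            close(fd);
            return 0;
        }
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               i, errno, strerror(errno));
        usleep(14 * 1000);                /* ~14 ms, mirroring the log cadence */
    }
    fprintf(stderr, "out of attempts: the controller would be marked failed here\n");
    return 1;
}

Against the refused port in the log this prints five ECONNREFUSED lines and gives up, which is the moral equivalent of the nvme_ctrlr_fail() transitions above; a listening target makes it exit on the first attempt.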
00:38:57.595 [2024-10-13 14:35:01.219967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.595 [2024-10-13 14:35:01.220618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.596 [2024-10-13 14:35:01.220663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.596 [2024-10-13 14:35:01.220675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.596 [2024-10-13 14:35:01.220917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.596 [2024-10-13 14:35:01.221151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.596 [2024-10-13 14:35:01.221161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.596 [2024-10-13 14:35:01.221169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.596 [2024-10-13 14:35:01.224730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.596 [2024-10-13 14:35:01.233755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.596 [2024-10-13 14:35:01.234421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.596 [2024-10-13 14:35:01.234463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.596 [2024-10-13 14:35:01.234474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.596 [2024-10-13 14:35:01.234715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.596 [2024-10-13 14:35:01.234938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.596 [2024-10-13 14:35:01.234947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.596 [2024-10-13 14:35:01.234955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.596 [2024-10-13 14:35:01.238522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.596 [2024-10-13 14:35:01.247745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.596 [2024-10-13 14:35:01.248418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.596 [2024-10-13 14:35:01.248460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.596 [2024-10-13 14:35:01.248471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.596 [2024-10-13 14:35:01.248712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.596 [2024-10-13 14:35:01.248935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.596 [2024-10-13 14:35:01.248948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.596 [2024-10-13 14:35:01.248956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.596 [2024-10-13 14:35:01.252520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.596 [2024-10-13 14:35:01.261515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.596 [2024-10-13 14:35:01.262169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.596 [2024-10-13 14:35:01.262210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.596 [2024-10-13 14:35:01.262222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.596 [2024-10-13 14:35:01.262465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.596 [2024-10-13 14:35:01.262688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.596 [2024-10-13 14:35:01.262696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.596 [2024-10-13 14:35:01.262704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.596 [2024-10-13 14:35:01.266268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.596 [2024-10-13 14:35:01.275474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.596 [2024-10-13 14:35:01.276141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.596 [2024-10-13 14:35:01.276181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.596 [2024-10-13 14:35:01.276193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.596 [2024-10-13 14:35:01.276435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.596 [2024-10-13 14:35:01.276658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.596 [2024-10-13 14:35:01.276667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.596 [2024-10-13 14:35:01.276675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.596 [2024-10-13 14:35:01.280236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.596 [2024-10-13 14:35:01.289442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.596 [2024-10-13 14:35:01.290096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.596 [2024-10-13 14:35:01.290136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.596 [2024-10-13 14:35:01.290148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.596 [2024-10-13 14:35:01.290388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.596 [2024-10-13 14:35:01.290610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.596 [2024-10-13 14:35:01.290619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.596 [2024-10-13 14:35:01.290627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.596 [2024-10-13 14:35:01.294190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.858 [2024-10-13 14:35:01.303404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.303955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.303975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.303982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.304210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.304430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.304439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.304446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.307990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.858 [2024-10-13 14:35:01.317189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.317861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.317900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.317911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.318159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.318383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.318392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.318399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.321950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
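The companion error in every cycle, "Failed to flush tqpair=0x182edc0 (9): Bad file descriptor", is plausibly a downstream symptom rather than a second fault: errno 9 is EBADF, and once connect() has failed the qpair's socket has already been torn down, so the completion path's flush is operating on a dead descriptor. Only the first error in each cycle (ECONNREFUSED) carries diagnostic weight.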
00:38:57.858 [2024-10-13 14:35:01.331164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.331832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.331871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.331882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.332129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.332353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.332362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.332369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.335921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.858 [2024-10-13 14:35:01.345131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.345698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.345737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.345748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.345992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.346224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.346234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.346242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.349802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.858 [2024-10-13 14:35:01.359014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.359558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.359578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.359586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.359806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.360025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.360033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.360040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.363590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.858 [2024-10-13 14:35:01.372996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.373581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.373599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.373607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.373826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.374045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.374053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.374060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.377612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.858 [2024-10-13 14:35:01.386812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.387347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.387364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.387371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.387591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.387809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.387818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.387829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.391380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.858 [2024-10-13 14:35:01.400581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.401124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.401149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.401157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.401380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.858 [2024-10-13 14:35:01.401601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.858 [2024-10-13 14:35:01.401609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.858 [2024-10-13 14:35:01.401616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.858 [2024-10-13 14:35:01.405170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.858 [2024-10-13 14:35:01.414365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.858 [2024-10-13 14:35:01.414990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.858 [2024-10-13 14:35:01.415029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.858 [2024-10-13 14:35:01.415041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.858 [2024-10-13 14:35:01.415291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.415514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.415525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.415532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.419089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.859 [2024-10-13 14:35:01.428305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.428831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.428870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.428881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.429128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.429352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.429361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.429369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.432920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.859 [2024-10-13 14:35:01.442130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.442794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.442833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.442845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.443092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.443316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.443325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.443333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.446895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.859 [2024-10-13 14:35:01.456113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.456742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.456781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.456792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.457031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.457262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.457271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.457279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.460831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.859 [2024-10-13 14:35:01.470040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.470721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.470760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.470771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.471010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.471241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.471251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.471258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.474810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.859 [2024-10-13 14:35:01.484015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.484498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.484537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.484549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.484794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.485017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.485026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.485034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.488596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.859 [2024-10-13 14:35:01.497799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.498469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.498508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.498519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.498758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.498981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.498989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.498997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.502557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.859 [2024-10-13 14:35:01.511762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.512440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.512480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.512490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.512730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.512952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.512961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.512969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.516529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.859 [2024-10-13 14:35:01.525744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.526385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.526424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.526437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.526677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.526900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.526909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.526921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.530484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:57.859 [2024-10-13 14:35:01.539686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.540339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.540379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.540390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.540628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.540851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.540860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.540867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.544427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:57.859 [2024-10-13 14:35:01.553647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:57.859 [2024-10-13 14:35:01.554317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:57.859 [2024-10-13 14:35:01.554356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:57.859 [2024-10-13 14:35:01.554367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:57.859 [2024-10-13 14:35:01.554606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:57.859 [2024-10-13 14:35:01.554828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:57.859 [2024-10-13 14:35:01.554837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:57.859 [2024-10-13 14:35:01.554845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:57.859 [2024-10-13 14:35:01.558406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.121 [2024-10-13 14:35:01.567607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.121 [2024-10-13 14:35:01.568154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.121 [2024-10-13 14:35:01.568174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.121 [2024-10-13 14:35:01.568183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.121 [2024-10-13 14:35:01.568403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.121 [2024-10-13 14:35:01.568622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.121 [2024-10-13 14:35:01.568630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.121 [2024-10-13 14:35:01.568637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.121 [2024-10-13 14:35:01.572191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.121 [2024-10-13 14:35:01.581392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.121 [2024-10-13 14:35:01.581906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.121 [2024-10-13 14:35:01.581949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.121 [2024-10-13 14:35:01.581961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.121 [2024-10-13 14:35:01.582209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.121 [2024-10-13 14:35:01.582434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.121 [2024-10-13 14:35:01.582442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.121 [2024-10-13 14:35:01.582450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.121 [2024-10-13 14:35:01.586002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.121 [2024-10-13 14:35:01.595207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.121 [2024-10-13 14:35:01.595721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.121 [2024-10-13 14:35:01.595760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.121 [2024-10-13 14:35:01.595771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.121 [2024-10-13 14:35:01.596010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.121 [2024-10-13 14:35:01.596243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.121 [2024-10-13 14:35:01.596253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.596260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.599812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.122 [2024-10-13 14:35:01.609014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.609599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.609619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.609627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.609847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.610072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.610081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.610088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.613633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.122 [2024-10-13 14:35:01.622829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.623478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.623517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.623529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.623769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.624001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.624010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.624018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.627588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.122 [2024-10-13 14:35:01.636795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.637435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.637475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.637486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.637725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.637948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.637956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.637964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.640679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.122 [2024-10-13 14:35:01.649442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.650012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.650044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.650052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.650228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.650382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.650389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.650394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.652836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.122 [2024-10-13 14:35:01.662163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.662730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.662762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.662771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.662938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.663098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.663106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.663111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.665553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.122 [2024-10-13 14:35:01.674877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.675382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.675397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.675403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.675554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.675705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.675711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.675716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.678154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.122 [2024-10-13 14:35:01.687475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.688030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.688061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.688075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.688242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.688395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.688401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.688407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.690845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.122 [2024-10-13 14:35:01.700177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.700680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.700695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.700701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.700852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.701003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.701009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.701013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.703453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.122 [2024-10-13 14:35:01.712777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.713358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.713390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.713401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.713568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.713721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.713728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.713733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.716176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.122 [2024-10-13 14:35:01.725509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.725854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.725868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.725874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.726025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.726182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.122 [2024-10-13 14:35:01.726188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.122 [2024-10-13 14:35:01.726193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.122 [2024-10-13 14:35:01.728628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.122 [2024-10-13 14:35:01.738173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.122 [2024-10-13 14:35:01.738744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.122 [2024-10-13 14:35:01.738774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.122 [2024-10-13 14:35:01.738783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.122 [2024-10-13 14:35:01.738949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.122 [2024-10-13 14:35:01.739109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.739116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.739121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.741558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.123 [2024-10-13 14:35:01.750907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.123 [2024-10-13 14:35:01.751391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.123 [2024-10-13 14:35:01.751422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.123 [2024-10-13 14:35:01.751431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.123 [2024-10-13 14:35:01.751600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.123 [2024-10-13 14:35:01.751753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.751763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.751769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.754214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.123 [2024-10-13 14:35:01.763552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.123 [2024-10-13 14:35:01.764137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.123 [2024-10-13 14:35:01.764168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.123 [2024-10-13 14:35:01.764177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.123 [2024-10-13 14:35:01.764346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.123 [2024-10-13 14:35:01.764499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.764505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.764510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.766955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.123 [2024-10-13 14:35:01.776143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.123 [2024-10-13 14:35:01.776715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.123 [2024-10-13 14:35:01.776746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.123 [2024-10-13 14:35:01.776754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.123 [2024-10-13 14:35:01.776920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.123 [2024-10-13 14:35:01.777079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.777086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.777091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.779529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.123 [2024-10-13 14:35:01.788854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.123 [2024-10-13 14:35:01.789319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.123 [2024-10-13 14:35:01.789335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.123 [2024-10-13 14:35:01.789340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.123 [2024-10-13 14:35:01.789492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.123 [2024-10-13 14:35:01.789644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.789650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.789654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.792092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.123 [2024-10-13 14:35:01.801539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.123 [2024-10-13 14:35:01.802032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.123 [2024-10-13 14:35:01.802046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.123 [2024-10-13 14:35:01.802052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.123 [2024-10-13 14:35:01.802208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.123 [2024-10-13 14:35:01.802359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.802366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.802371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.804802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.123 [2024-10-13 14:35:01.814272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.123 [2024-10-13 14:35:01.814720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.123 [2024-10-13 14:35:01.814732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.123 [2024-10-13 14:35:01.814737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.123 [2024-10-13 14:35:01.814888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.123 [2024-10-13 14:35:01.815038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.123 [2024-10-13 14:35:01.815045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.123 [2024-10-13 14:35:01.815050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.123 [2024-10-13 14:35:01.817488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.385 [2024-10-13 14:35:01.826966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.385 [2024-10-13 14:35:01.827397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.385 [2024-10-13 14:35:01.827409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.385 [2024-10-13 14:35:01.827415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.385 [2024-10-13 14:35:01.827566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.385 [2024-10-13 14:35:01.827716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.385 [2024-10-13 14:35:01.827722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.385 [2024-10-13 14:35:01.827728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.385 [2024-10-13 14:35:01.830165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.385 [2024-10-13 14:35:01.839631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.385 [2024-10-13 14:35:01.840166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.385 [2024-10-13 14:35:01.840197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.385 [2024-10-13 14:35:01.840206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.385 [2024-10-13 14:35:01.840379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.385 [2024-10-13 14:35:01.840533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.385 [2024-10-13 14:35:01.840539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.385 [2024-10-13 14:35:01.840544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.385 [2024-10-13 14:35:01.842987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.385 [2024-10-13 14:35:01.852328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.385 [2024-10-13 14:35:01.852880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.385 [2024-10-13 14:35:01.852911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.385 [2024-10-13 14:35:01.852919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.385 [2024-10-13 14:35:01.853090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.385 [2024-10-13 14:35:01.853244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.385 [2024-10-13 14:35:01.853251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.385 [2024-10-13 14:35:01.853256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.385 [2024-10-13 14:35:01.855694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.385 [2024-10-13 14:35:01.865023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.385 [2024-10-13 14:35:01.865522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.385 [2024-10-13 14:35:01.865537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.385 [2024-10-13 14:35:01.865542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.385 [2024-10-13 14:35:01.865694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.385 [2024-10-13 14:35:01.865844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.385 [2024-10-13 14:35:01.865850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.385 [2024-10-13 14:35:01.865857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.385 [2024-10-13 14:35:01.868295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.385 [2024-10-13 14:35:01.877617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.385 [2024-10-13 14:35:01.878099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.385 [2024-10-13 14:35:01.878112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.385 [2024-10-13 14:35:01.878117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.385 [2024-10-13 14:35:01.878269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.385 [2024-10-13 14:35:01.878419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.878425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.878433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.880869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.386 [2024-10-13 14:35:01.890337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.890788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.890799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.890804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.890955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.891110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.891116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.891122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.893555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.386 [2024-10-13 14:35:01.903018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.903467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.903479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.903484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.903634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.903785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.903791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.903795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.906229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.386 [2024-10-13 14:35:01.915694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.916116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.916128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.916134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.916284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.916436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.916442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.916448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.918881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.386 [2024-10-13 14:35:01.928357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.928699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.928710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.928715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.928865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.929017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.929023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.929028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.931464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.386 [2024-10-13 14:35:01.941072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.941525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.941536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.941541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.941692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.941842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.941848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.941853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.944288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.386 [2024-10-13 14:35:01.953756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.954139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.954152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.954157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.954308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.954459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.954465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.954470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.956902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.386 [2024-10-13 14:35:01.966370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.966819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.966831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.966836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.966987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.967147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.967153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.967158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.969595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.386 [2024-10-13 14:35:01.979060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.979565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.979577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.979583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.979733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.979884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.979889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.979894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.982331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.386 [2024-10-13 14:35:01.991659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:01.992152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:01.992164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:01.992170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:01.992320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:01.992471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:01.992476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:01.992481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:01.994914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.386 [2024-10-13 14:35:02.004380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:02.004698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:02.004710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:02.004715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:02.004866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.386 [2024-10-13 14:35:02.005016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.386 [2024-10-13 14:35:02.005022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.386 [2024-10-13 14:35:02.005027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.386 [2024-10-13 14:35:02.007471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.386 [2024-10-13 14:35:02.017079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.386 [2024-10-13 14:35:02.017630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.386 [2024-10-13 14:35:02.017661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.386 [2024-10-13 14:35:02.017670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.386 [2024-10-13 14:35:02.017836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.387 [2024-10-13 14:35:02.017990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.387 [2024-10-13 14:35:02.017996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.387 [2024-10-13 14:35:02.018001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.387 [2024-10-13 14:35:02.020451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.387 [2024-10-13 14:35:02.029784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.387 [2024-10-13 14:35:02.030293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.387 [2024-10-13 14:35:02.030308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.387 [2024-10-13 14:35:02.030314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.387 [2024-10-13 14:35:02.030465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.387 [2024-10-13 14:35:02.030616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.387 [2024-10-13 14:35:02.030622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.387 [2024-10-13 14:35:02.030627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.387 [2024-10-13 14:35:02.033058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.387 6984.75 IOPS, 27.28 MiB/s [2024-10-13T12:35:02.094Z] [2024-10-13 14:35:02.043231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.387 [2024-10-13 14:35:02.043679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.387 [2024-10-13 14:35:02.043691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.387 [2024-10-13 14:35:02.043697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.387 [2024-10-13 14:35:02.043847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.387 [2024-10-13 14:35:02.043998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.387 [2024-10-13 14:35:02.044004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.387 [2024-10-13 14:35:02.044008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.387 [2024-10-13 14:35:02.046445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.387 [2024-10-13 14:35:02.055915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.387 [2024-10-13 14:35:02.056393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.387 [2024-10-13 14:35:02.056428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.387 [2024-10-13 14:35:02.056437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.387 [2024-10-13 14:35:02.056603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.387 [2024-10-13 14:35:02.056756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.387 [2024-10-13 14:35:02.056763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.387 [2024-10-13 14:35:02.056768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.387 [2024-10-13 14:35:02.059214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.387 [2024-10-13 14:35:02.068546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.387 [2024-10-13 14:35:02.069156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.387 [2024-10-13 14:35:02.069189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.387 [2024-10-13 14:35:02.069197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.387 [2024-10-13 14:35:02.069366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.387 [2024-10-13 14:35:02.069520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.387 [2024-10-13 14:35:02.069526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.387 [2024-10-13 14:35:02.069532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.387 [2024-10-13 14:35:02.071975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.387 [2024-10-13 14:35:02.081160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.387 [2024-10-13 14:35:02.081645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.387 [2024-10-13 14:35:02.081659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.387 [2024-10-13 14:35:02.081665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.387 [2024-10-13 14:35:02.081816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.387 [2024-10-13 14:35:02.081967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.387 [2024-10-13 14:35:02.081973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.387 [2024-10-13 14:35:02.081978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.387 [2024-10-13 14:35:02.084417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.649 [2024-10-13 14:35:02.093877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.649 [2024-10-13 14:35:02.094402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.649 [2024-10-13 14:35:02.094433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.649 [2024-10-13 14:35:02.094441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.649 [2024-10-13 14:35:02.094608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.649 [2024-10-13 14:35:02.094765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.649 [2024-10-13 14:35:02.094771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.649 [2024-10-13 14:35:02.094777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.649 [2024-10-13 14:35:02.097222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.649 [2024-10-13 14:35:02.106550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.649 [2024-10-13 14:35:02.107141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.649 [2024-10-13 14:35:02.107172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.649 [2024-10-13 14:35:02.107180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.649 [2024-10-13 14:35:02.107349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.649 [2024-10-13 14:35:02.107502] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.649 [2024-10-13 14:35:02.107509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.649 [2024-10-13 14:35:02.107514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.649 [2024-10-13 14:35:02.109959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.649 [2024-10-13 14:35:02.119144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.649 [2024-10-13 14:35:02.119609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.649 [2024-10-13 14:35:02.119640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.649 [2024-10-13 14:35:02.119649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.649 [2024-10-13 14:35:02.119815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.119968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.119975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.119980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.122423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.650 [2024-10-13 14:35:02.131755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.132124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.132139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.132145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.132297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.132447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.132453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.132458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.134897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.650 [2024-10-13 14:35:02.144364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.144720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.144732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.144738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.144889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.145048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.145054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.145059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.147504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.650 [2024-10-13 14:35:02.156968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.157538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.157569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.157577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.157745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.157899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.157906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.157911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.160354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.650 [2024-10-13 14:35:02.169674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.170194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.170226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.170235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.170403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.170557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.170563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.170569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.173013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.650 [2024-10-13 14:35:02.182346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.182783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.182813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.182826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.182992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.183155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.183164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.183170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.185623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.650 [2024-10-13 14:35:02.194945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.195489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.195504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.195509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.195661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.195811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.195817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.195822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.198258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.650 [2024-10-13 14:35:02.207579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.208057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.208073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.208078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.208229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.208380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.208385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.208390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.210822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
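Every cycle above starts the same way: SPDK's POSIX sock layer (posix_sock_create in posix.c) issues a plain connect() toward 10.0.0.2 port 4420, the IANA-assigned NVMe/TCP port, and the peer refuses it. A self-contained sketch of just that step, assuming nothing is listening on the target side; the address and port are taken from the log lines, everything else is illustrative rather than SPDK's actual socket code:

/*
 * Reproduce the connect() failure reported by posix_sock_create above.
 * With no listener on 10.0.0.2:4420 this prints, on this testbed:
 *   connect() failed, errno = 111 (Connection refused)
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* IANA-assigned NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}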
00:38:58.650 [2024-10-13 14:35:02.220284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.220700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.220711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.220717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.220867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.221017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.221026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.221031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.223470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.650 [2024-10-13 14:35:02.232945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.233578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.233609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.233618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.233784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.233938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.233944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.233949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.236398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.650 [2024-10-13 14:35:02.245585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.246137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.650 [2024-10-13 14:35:02.246168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.650 [2024-10-13 14:35:02.246177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.650 [2024-10-13 14:35:02.246346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.650 [2024-10-13 14:35:02.246499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.650 [2024-10-13 14:35:02.246505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.650 [2024-10-13 14:35:02.246511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.650 [2024-10-13 14:35:02.248960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.650 [2024-10-13 14:35:02.258288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.650 [2024-10-13 14:35:02.258774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.258789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.258794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.258946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.259102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.259108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.259113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.261548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.651 [2024-10-13 14:35:02.270876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.271433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.271464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.271473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.271639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.271793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.271799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.271805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.274249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.651 [2024-10-13 14:35:02.283582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.284134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.284165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.284174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.284343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.284496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.284502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.284508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.286953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.651 [2024-10-13 14:35:02.296284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.296843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.296874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.296883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.297049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.297210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.297216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.297222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.299661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.651 [2024-10-13 14:35:02.308984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.309532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.309563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.309572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.309742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.309896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.309902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.309908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.312353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.651 [2024-10-13 14:35:02.321678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.322200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.322216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.322221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.322373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.322524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.322530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.322535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.324967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.651 [2024-10-13 14:35:02.334305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.334792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.334805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.334810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.334961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.335116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.335122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.335127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.337557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.651 [2024-10-13 14:35:02.347016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.651 [2024-10-13 14:35:02.347528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.651 [2024-10-13 14:35:02.347540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.651 [2024-10-13 14:35:02.347545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.651 [2024-10-13 14:35:02.347696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.651 [2024-10-13 14:35:02.347846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.651 [2024-10-13 14:35:02.347852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.651 [2024-10-13 14:35:02.347860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.651 [2024-10-13 14:35:02.350298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.913 [2024-10-13 14:35:02.359620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.913 [2024-10-13 14:35:02.360070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.913 [2024-10-13 14:35:02.360082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.913 [2024-10-13 14:35:02.360087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.913 [2024-10-13 14:35:02.360238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.913 [2024-10-13 14:35:02.360389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.913 [2024-10-13 14:35:02.360394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.913 [2024-10-13 14:35:02.360399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.913 [2024-10-13 14:35:02.362832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.913 [2024-10-13 14:35:02.372317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.913 [2024-10-13 14:35:02.372808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.913 [2024-10-13 14:35:02.372821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.913 [2024-10-13 14:35:02.372827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.913 [2024-10-13 14:35:02.372978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.913 [2024-10-13 14:35:02.373132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.913 [2024-10-13 14:35:02.373139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.373144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.375577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.914 [2024-10-13 14:35:02.385045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.385640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.385671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.385679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.385846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.385999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.386006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.386011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.388453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.914 [2024-10-13 14:35:02.397637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.398232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.398263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.398272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.398437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.398591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.398598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.398603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.401046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.914 [2024-10-13 14:35:02.410232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.410725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.410756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.410764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.410930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.411090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.411097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.411102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.413541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.914 [2024-10-13 14:35:02.422861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.423488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.423520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.423529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.423695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.423849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.423855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.423861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.426314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.914 [2024-10-13 14:35:02.435499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.436046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.436082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.436092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.436262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.436419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.436426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.436433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.438872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.914 [2024-10-13 14:35:02.448207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.448693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.448707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.448713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.448865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.449016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.449021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.449026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.451464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.914 [2024-10-13 14:35:02.460923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.461483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.461514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.461522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.461688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.461842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.461848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.461853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.464298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.914 [2024-10-13 14:35:02.473617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.473956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.473970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.473976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.474131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.914 [2024-10-13 14:35:02.474282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.914 [2024-10-13 14:35:02.474288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.914 [2024-10-13 14:35:02.474293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.914 [2024-10-13 14:35:02.476729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.914 [2024-10-13 14:35:02.486335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.914 [2024-10-13 14:35:02.486782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.914 [2024-10-13 14:35:02.486794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.914 [2024-10-13 14:35:02.486800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.914 [2024-10-13 14:35:02.486950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.487108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.487114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.487119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.489552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.915 [2024-10-13 14:35:02.499014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.499570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.499601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.499609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.499776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.499929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.499936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.499941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.502386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.915 [2024-10-13 14:35:02.511601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.512109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.512131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.512137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.512294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.512447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.512453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.512458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.514898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.915 [2024-10-13 14:35:02.524214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.524799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.524833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.524841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.525008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.525175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.525183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.525188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.527628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.915 [2024-10-13 14:35:02.536799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.537372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.537403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.537412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.537578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.537731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.537738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.537743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.540187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.915 [2024-10-13 14:35:02.549511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.550102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.550133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.550142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.550311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.550464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.550470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.550476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.552919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.915 [2024-10-13 14:35:02.562096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.562668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.562699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.562707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.562873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.563030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.563037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.563042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.565488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.915 [2024-10-13 14:35:02.574806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.575352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.575383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.575392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.575558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.575711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.575718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.575723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.578168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.915 [2024-10-13 14:35:02.587486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.588053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.588089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.588097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.588263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.588417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.588423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.588429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.590869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:58.915 [2024-10-13 14:35:02.600193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.600652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.600682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.600691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.915 [2024-10-13 14:35:02.600857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.915 [2024-10-13 14:35:02.601011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.915 [2024-10-13 14:35:02.601017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.915 [2024-10-13 14:35:02.601023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.915 [2024-10-13 14:35:02.603467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:58.915 [2024-10-13 14:35:02.612793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:58.915 [2024-10-13 14:35:02.613248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:58.915 [2024-10-13 14:35:02.613263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:58.915 [2024-10-13 14:35:02.613269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:58.916 [2024-10-13 14:35:02.613420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:58.916 [2024-10-13 14:35:02.613571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:58.916 [2024-10-13 14:35:02.613577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:58.916 [2024-10-13 14:35:02.613582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:58.916 [2024-10-13 14:35:02.616017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.177 [2024-10-13 14:35:02.625496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.177 [2024-10-13 14:35:02.625984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.177 [2024-10-13 14:35:02.625996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.177 [2024-10-13 14:35:02.626001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.177 [2024-10-13 14:35:02.626158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.177 [2024-10-13 14:35:02.626309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.177 [2024-10-13 14:35:02.626315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.177 [2024-10-13 14:35:02.626320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.177 [2024-10-13 14:35:02.628750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.177 [2024-10-13 14:35:02.638207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.177 [2024-10-13 14:35:02.638684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.177 [2024-10-13 14:35:02.638696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.177 [2024-10-13 14:35:02.638701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.177 [2024-10-13 14:35:02.638852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.177 [2024-10-13 14:35:02.639002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.177 [2024-10-13 14:35:02.639008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.177 [2024-10-13 14:35:02.639013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.177 [2024-10-13 14:35:02.641447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.177 [2024-10-13 14:35:02.650924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.177 [2024-10-13 14:35:02.651473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.177 [2024-10-13 14:35:02.651504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.177 [2024-10-13 14:35:02.651516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.177 [2024-10-13 14:35:02.651682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.177 [2024-10-13 14:35:02.651835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.177 [2024-10-13 14:35:02.651842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.177 [2024-10-13 14:35:02.651847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.177 [2024-10-13 14:35:02.654292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.177 [2024-10-13 14:35:02.663612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.177 [2024-10-13 14:35:02.664208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.664240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.664249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.664415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.664569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.664575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.664581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.667025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.676201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.676781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.676812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.676821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.676987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.677150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.677158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.677163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.679603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.688793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.689379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.689411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.689420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.689587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.689740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.689751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.689756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.692206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.701408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.701977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.702007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.702016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.702193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.702348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.702354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.702359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.704799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.714124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.714645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.714676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.714685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.714852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.715005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.715012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.715017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.717460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.726787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.727361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.727392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.727401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.727567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.727721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.727727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.727732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.730177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.739496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.740068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.740099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.740108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.740277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.740431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.740437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.740442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.742884] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.752212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.752778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.752808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.752817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.752983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.753144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.753151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.753156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.755592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.764847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.765439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.765470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.765479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.765645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.765798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.765804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.765810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.768257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.777438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.778020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.778051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.778060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.778236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.778390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.778397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.778402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.780838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.178 [2024-10-13 14:35:02.790158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.178 [2024-10-13 14:35:02.790725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.178 [2024-10-13 14:35:02.790756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.178 [2024-10-13 14:35:02.790765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.178 [2024-10-13 14:35:02.790932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.178 [2024-10-13 14:35:02.791092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.178 [2024-10-13 14:35:02.791099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.178 [2024-10-13 14:35:02.791104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.178 [2024-10-13 14:35:02.793543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.802866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.803466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.803497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.803506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.803672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.803826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.803832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.803837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.179 [2024-10-13 14:35:02.806281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.815454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.816024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.816054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.816069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.816236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.816390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.816396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.816405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.179 [2024-10-13 14:35:02.818844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.828148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.828722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.828752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.828761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.828928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.829090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.829097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.829102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.179 [2024-10-13 14:35:02.831541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.840860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.841442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.841473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.841482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.841648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.841802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.841808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.841813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.179 [2024-10-13 14:35:02.844257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.853585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.854138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.854169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.854177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.854347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.854500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.854506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.854512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.179 [2024-10-13 14:35:02.856956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.866277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.866766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.866784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.866790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.866942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.867098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.867104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.867109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.179 [2024-10-13 14:35:02.869545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.179 [2024-10-13 14:35:02.878875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.179 [2024-10-13 14:35:02.879429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.179 [2024-10-13 14:35:02.879460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.179 [2024-10-13 14:35:02.879469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.179 [2024-10-13 14:35:02.879635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.179 [2024-10-13 14:35:02.879789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.179 [2024-10-13 14:35:02.879795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.179 [2024-10-13 14:35:02.879801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.882248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.891474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.891925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.891939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.441 [2024-10-13 14:35:02.891945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.441 [2024-10-13 14:35:02.892101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.441 [2024-10-13 14:35:02.892253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.441 [2024-10-13 14:35:02.892259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.441 [2024-10-13 14:35:02.892264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.894698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.904175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.904760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.904790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.441 [2024-10-13 14:35:02.904799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.441 [2024-10-13 14:35:02.904966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.441 [2024-10-13 14:35:02.905135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.441 [2024-10-13 14:35:02.905142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.441 [2024-10-13 14:35:02.905147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.907588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.916774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.917377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.917408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.441 [2024-10-13 14:35:02.917417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.441 [2024-10-13 14:35:02.917583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.441 [2024-10-13 14:35:02.917737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.441 [2024-10-13 14:35:02.917743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.441 [2024-10-13 14:35:02.917749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.920192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.929388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.929971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.930002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.441 [2024-10-13 14:35:02.930012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.441 [2024-10-13 14:35:02.930190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.441 [2024-10-13 14:35:02.930344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.441 [2024-10-13 14:35:02.930350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.441 [2024-10-13 14:35:02.930356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.932797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.941990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.942569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.942599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.441 [2024-10-13 14:35:02.942609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.441 [2024-10-13 14:35:02.942775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.441 [2024-10-13 14:35:02.942929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.441 [2024-10-13 14:35:02.942938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.441 [2024-10-13 14:35:02.942944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.945392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.954584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.955073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.955088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.441 [2024-10-13 14:35:02.955094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.441 [2024-10-13 14:35:02.955245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.441 [2024-10-13 14:35:02.955396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.441 [2024-10-13 14:35:02.955402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.441 [2024-10-13 14:35:02.955407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.441 [2024-10-13 14:35:02.957839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.441 [2024-10-13 14:35:02.967304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.441 [2024-10-13 14:35:02.967763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.441 [2024-10-13 14:35:02.967776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:02.967781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:02.967932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:02.968086] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:02.968093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:02.968098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:02.970530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:02.979990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:02.980482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:02.980495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:02.980500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:02.980651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:02.980802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:02.980807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:02.980812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:02.983247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:02.992718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:02.993296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:02.993327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:02.993339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:02.993506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:02.993659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:02.993666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:02.993671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:02.996120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.005312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.005886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.005917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.005926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.006099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.006253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.006259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.006265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.008709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.017905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.018352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.018367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.018373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.018524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.018675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.018681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.018686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.021124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.030601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.031136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.031167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.031175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.031344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.031498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.031507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.031513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.033959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 5587.80 IOPS, 21.83 MiB/s [2024-10-13T12:35:03.149Z] [2024-10-13 14:35:03.043277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.043844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.043875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.043884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.044050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.044211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.044218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.044224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.046661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.055985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.056559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.056590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.056599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.056765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.056918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.056925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.056930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.059375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.068694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.069172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.069187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.069192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.069344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.069494] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.069500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.069505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.071938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.081400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.081974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.082004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.082013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.082190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.082344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.082350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.082356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.084791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.094107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.442 [2024-10-13 14:35:03.094678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.442 [2024-10-13 14:35:03.094709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.442 [2024-10-13 14:35:03.094717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.442 [2024-10-13 14:35:03.094884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.442 [2024-10-13 14:35:03.095037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.442 [2024-10-13 14:35:03.095043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.442 [2024-10-13 14:35:03.095049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.442 [2024-10-13 14:35:03.097494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.442 [2024-10-13 14:35:03.106809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.443 [2024-10-13 14:35:03.107406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.443 [2024-10-13 14:35:03.107437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.443 [2024-10-13 14:35:03.107446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.443 [2024-10-13 14:35:03.107612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.443 [2024-10-13 14:35:03.107765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.443 [2024-10-13 14:35:03.107772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.443 [2024-10-13 14:35:03.107777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.443 [2024-10-13 14:35:03.110222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.443 [2024-10-13 14:35:03.119397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.443 [2024-10-13 14:35:03.119961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.443 [2024-10-13 14:35:03.119991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.443 [2024-10-13 14:35:03.120003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.443 [2024-10-13 14:35:03.120177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.443 [2024-10-13 14:35:03.120331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.443 [2024-10-13 14:35:03.120337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.443 [2024-10-13 14:35:03.120343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.443 [2024-10-13 14:35:03.122779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.443 [2024-10-13 14:35:03.132105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.443 [2024-10-13 14:35:03.132674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.443 [2024-10-13 14:35:03.132704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.443 [2024-10-13 14:35:03.132713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.443 [2024-10-13 14:35:03.132879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.443 [2024-10-13 14:35:03.133032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.443 [2024-10-13 14:35:03.133038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.443 [2024-10-13 14:35:03.133044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.443 [2024-10-13 14:35:03.135488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.443 [2024-10-13 14:35:03.144816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.443 [2024-10-13 14:35:03.145386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.443 [2024-10-13 14:35:03.145417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.443 [2024-10-13 14:35:03.145426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.443 [2024-10-13 14:35:03.145595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.443 [2024-10-13 14:35:03.145749] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.443 [2024-10-13 14:35:03.145755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.443 [2024-10-13 14:35:03.145760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.705 [2024-10-13 14:35:03.148217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.705 [2024-10-13 14:35:03.157403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.705 [2024-10-13 14:35:03.157887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.705 [2024-10-13 14:35:03.157918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.705 [2024-10-13 14:35:03.157927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.705 [2024-10-13 14:35:03.158101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.705 [2024-10-13 14:35:03.158255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.705 [2024-10-13 14:35:03.158265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.705 [2024-10-13 14:35:03.158270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.705 [2024-10-13 14:35:03.160707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.705 [2024-10-13 14:35:03.170022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:59.705 [2024-10-13 14:35:03.170528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:59.705 [2024-10-13 14:35:03.170559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:38:59.705 [2024-10-13 14:35:03.170567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:38:59.705 [2024-10-13 14:35:03.170736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:38:59.705 [2024-10-13 14:35:03.170889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:59.705 [2024-10-13 14:35:03.170896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:59.705 [2024-10-13 14:35:03.170901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:59.705 [2024-10-13 14:35:03.173347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:59.705 [2024-10-13 14:35:03.182667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.705 [2024-10-13 14:35:03.183173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.705 [2024-10-13 14:35:03.183210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.705 [2024-10-13 14:35:03.183218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.705 [2024-10-13 14:35:03.183387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.705 [2024-10-13 14:35:03.183540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.705 [2024-10-13 14:35:03.183547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.705 [2024-10-13 14:35:03.183553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.705 [2024-10-13 14:35:03.185996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.705 [2024-10-13 14:35:03.195328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.705 [2024-10-13 14:35:03.195899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.705 [2024-10-13 14:35:03.195930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.705 [2024-10-13 14:35:03.195939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.705 [2024-10-13 14:35:03.196115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.705 [2024-10-13 14:35:03.196270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.705 [2024-10-13 14:35:03.196276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.705 [2024-10-13 14:35:03.196281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.705 [2024-10-13 14:35:03.198719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.705 [2024-10-13 14:35:03.208047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.705 [2024-10-13 14:35:03.208588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.705 [2024-10-13 14:35:03.208619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.705 [2024-10-13 14:35:03.208628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.705 [2024-10-13 14:35:03.208796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.705 [2024-10-13 14:35:03.208949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.705 [2024-10-13 14:35:03.208955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.705 [2024-10-13 14:35:03.208960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.705 [2024-10-13 14:35:03.211407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.705 [2024-10-13 14:35:03.220728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.705 [2024-10-13 14:35:03.221292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.705 [2024-10-13 14:35:03.221323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.705 [2024-10-13 14:35:03.221331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.705 [2024-10-13 14:35:03.221498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.705 [2024-10-13 14:35:03.221651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.705 [2024-10-13 14:35:03.221657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.221663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.224106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.706 [2024-10-13 14:35:03.233436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.234017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.234048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.234057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.234230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.234384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.234390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.234396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.236833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.706 [2024-10-13 14:35:03.246153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.246690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.246720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.246729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.246899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.247053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.247059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.247073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.249518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.706 [2024-10-13 14:35:03.258835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.259297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.259312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.259318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.259469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.259620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.259625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.259630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.262065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.706 [2024-10-13 14:35:03.271520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.272001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.272013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.272018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.272173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.272324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.272330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.272335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.274763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.706 [2024-10-13 14:35:03.284217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.284661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.284672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.284678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.284829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.284979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.284985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.284993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.287429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.706 [2024-10-13 14:35:03.296891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.297449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.297479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.297488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.297654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.297808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.297814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.297820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.300265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.706 [2024-10-13 14:35:03.309581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.310155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.310186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.310194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.310362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.310516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.310522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.310528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.312971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.706 [2024-10-13 14:35:03.322295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.322746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.322761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.322766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.322917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.323073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.323079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.323084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.325517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.706 [2024-10-13 14:35:03.334977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.335550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.335580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.335589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.335756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.335909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.335915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.335921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.338365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.706 [2024-10-13 14:35:03.347684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.348170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.348186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.348191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.348349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.348501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.706 [2024-10-13 14:35:03.348506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.706 [2024-10-13 14:35:03.348512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.706 [2024-10-13 14:35:03.350946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.706 [2024-10-13 14:35:03.360406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.706 [2024-10-13 14:35:03.360968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.706 [2024-10-13 14:35:03.360999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.706 [2024-10-13 14:35:03.361008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.706 [2024-10-13 14:35:03.361183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.706 [2024-10-13 14:35:03.361337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.707 [2024-10-13 14:35:03.361344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.707 [2024-10-13 14:35:03.361350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.707 [2024-10-13 14:35:03.363786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.707 [2024-10-13 14:35:03.373106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.707 [2024-10-13 14:35:03.373676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.707 [2024-10-13 14:35:03.373706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.707 [2024-10-13 14:35:03.373715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.707 [2024-10-13 14:35:03.373882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.707 [2024-10-13 14:35:03.374039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.707 [2024-10-13 14:35:03.374045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.707 [2024-10-13 14:35:03.374050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.707 [2024-10-13 14:35:03.376495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.707 [2024-10-13 14:35:03.385816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.707 [2024-10-13 14:35:03.386299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.707 [2024-10-13 14:35:03.386329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.707 [2024-10-13 14:35:03.386338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.707 [2024-10-13 14:35:03.386504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.707 [2024-10-13 14:35:03.386658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.707 [2024-10-13 14:35:03.386664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.707 [2024-10-13 14:35:03.386669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.707 [2024-10-13 14:35:03.389115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.707 [2024-10-13 14:35:03.398433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.707 [2024-10-13 14:35:03.399007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.707 [2024-10-13 14:35:03.399038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.707 [2024-10-13 14:35:03.399047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.707 [2024-10-13 14:35:03.399220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.707 [2024-10-13 14:35:03.399375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.707 [2024-10-13 14:35:03.399381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.707 [2024-10-13 14:35:03.399386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.707 [2024-10-13 14:35:03.401824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.969 [2024-10-13 14:35:03.411160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.969 [2024-10-13 14:35:03.411646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.969 [2024-10-13 14:35:03.411677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.969 [2024-10-13 14:35:03.411685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.969 [2024-10-13 14:35:03.411852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.969 [2024-10-13 14:35:03.412005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.969 [2024-10-13 14:35:03.412012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.969 [2024-10-13 14:35:03.412017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.969 [2024-10-13 14:35:03.414465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.969 [2024-10-13 14:35:03.423794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.969 [2024-10-13 14:35:03.424339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.969 [2024-10-13 14:35:03.424370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.969 [2024-10-13 14:35:03.424378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.969 [2024-10-13 14:35:03.424545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.969 [2024-10-13 14:35:03.424698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.969 [2024-10-13 14:35:03.424704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.969 [2024-10-13 14:35:03.424710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.969 [2024-10-13 14:35:03.427159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.969 [2024-10-13 14:35:03.436483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.969 [2024-10-13 14:35:03.436973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.437005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.437013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.437189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.437344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.437351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.437356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.439796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.970 [2024-10-13 14:35:03.449142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.449639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.449654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.449660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.449813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.449964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.449969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.449974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.452415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.970 [2024-10-13 14:35:03.461739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.462185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.462198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.462207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.462358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.462509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.462514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.462519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.464952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.970 [2024-10-13 14:35:03.474418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.474755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.474767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.474773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.474924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.475078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.475084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.475090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.477524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.970 [2024-10-13 14:35:03.487146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.487588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.487600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.487605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.487755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.487906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.487912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.487917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.490356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.970 [2024-10-13 14:35:03.499832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.500393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.500424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.500433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.500599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.500753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.500762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.500768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.503216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.970 [2024-10-13 14:35:03.512556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.513147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.513177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.513186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.513355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.513508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.513515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.513520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.515962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.970 [2024-10-13 14:35:03.525146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.525701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.525732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.525741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.525907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.526060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.526074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.526079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.528530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.970 [2024-10-13 14:35:03.537857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.538359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.538390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.538399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.538565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.538719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.538728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.538734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.541180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.970 [2024-10-13 14:35:03.550521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.551100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.551131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.551139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.551308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.551462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.551468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.551473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.553917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.970 [2024-10-13 14:35:03.563134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.970 [2024-10-13 14:35:03.563616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.970 [2024-10-13 14:35:03.563631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.970 [2024-10-13 14:35:03.563636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.970 [2024-10-13 14:35:03.563787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.970 [2024-10-13 14:35:03.563938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.970 [2024-10-13 14:35:03.563944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.970 [2024-10-13 14:35:03.563949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.970 [2024-10-13 14:35:03.566386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.971 [2024-10-13 14:35:03.575847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.576486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.576517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.576526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.576692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.576845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.576852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.576857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 [2024-10-13 14:35:03.579299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.971 [2024-10-13 14:35:03.588481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.588919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.588934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.588943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.589100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.589252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.589258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.589263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 [2024-10-13 14:35:03.591699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.971 [2024-10-13 14:35:03.601184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.601745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.601775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.601784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.601950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.602110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.602117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.602123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 [2024-10-13 14:35:03.604563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.971 [2024-10-13 14:35:03.613899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.614362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.614377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.614382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.614534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.614684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.614690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.614695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 [2024-10-13 14:35:03.617135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.971 [2024-10-13 14:35:03.626615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.627096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.627109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.627115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.627266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.627416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.627426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.627430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 [2024-10-13 14:35:03.629876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:59.971 [2024-10-13 14:35:03.639217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.639685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.639717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.639725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.639891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.640045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.640052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.640057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 [2024-10-13 14:35:03.642504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:59.971 [2024-10-13 14:35:03.651842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 [2024-10-13 14:35:03.652412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.652443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.652452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 [2024-10-13 14:35:03.652618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 [2024-10-13 14:35:03.652772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.652778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.652783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1982341 Killed "${NVMF_APP[@]}" "$@" 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:59.971 [2024-10-13 14:35:03.655230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
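Every failed reset above follows the same shape: posix_sock_create reports connect() errno = 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 once the harness kills the previous target (the `Killed "${NVMF_APP[@]}"` line just above), so each reconnect attempt is refused until a new listener comes up. A minimal sketch of that condition, assuming a bash with /dev/tcp support; the probe is illustrative and not part of the test itself:

    # Illustrative only: connect() to a TCP port with no listener fails
    # with ECONNREFUSED (errno 111) -- the same error posix_sock_create
    # logs above. This loop waits until 10.0.0.2:4420 accepts again.
    until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo 'port 4420 still refusing connections (errno 111)'
        sleep 1
    done
    echo 'listener on 10.0.0.2:4420 is back'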
00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1983937 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1983937 00:38:59.971 [2024-10-13 14:35:03.664555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1983937 ']' 00:38:59.971 [2024-10-13 14:35:03.665035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:59.971 [2024-10-13 14:35:03.665054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:38:59.971 [2024-10-13 14:35:03.665060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.971 [2024-10-13 14:35:03.665219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:59.971 [2024-10-13 14:35:03.665370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:59.971 [2024-10-13 14:35:03.665377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:59.971 [2024-10-13 14:35:03.665382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:59.971 14:35:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:59.971 [2024-10-13 14:35:03.667818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
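The `waitforlisten 1983937` step traced above blocks until the relaunched target answers on its RPC socket (/var/tmp/spdk.sock, per the echo). Roughly what such a wait amounts to, as a sketch: rpc.py and rpc_get_methods are the stock SPDK RPC client and method, but using them as the readiness probe here is an assumption, not the harness's exact implementation.

    # Sketch: poll the new nvmf_tgt's RPC socket until it services requests.
    # rpc.py ships with SPDK; /var/tmp/spdk.sock matches the echo above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while ! "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done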
00:39:00.234 [2024-10-13 14:35:03.677146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.234 [2024-10-13 14:35:03.677592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.234 [2024-10-13 14:35:03.677623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.234 [2024-10-13 14:35:03.677632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.234 [2024-10-13 14:35:03.677798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.234 [2024-10-13 14:35:03.677952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.234 [2024-10-13 14:35:03.677958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.234 [2024-10-13 14:35:03.677964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.234 [2024-10-13 14:35:03.680407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.234 [2024-10-13 14:35:03.689736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.234 [2024-10-13 14:35:03.690370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.234 [2024-10-13 14:35:03.690402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.234 [2024-10-13 14:35:03.690411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.234 [2024-10-13 14:35:03.690577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.234 [2024-10-13 14:35:03.690731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.234 [2024-10-13 14:35:03.690737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.234 [2024-10-13 14:35:03.690743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.234 [2024-10-13 14:35:03.693186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.234 [2024-10-13 14:35:03.702378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.234 [2024-10-13 14:35:03.702883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.234 [2024-10-13 14:35:03.702898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.234 [2024-10-13 14:35:03.702905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.234 [2024-10-13 14:35:03.703056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.234 [2024-10-13 14:35:03.703214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.234 [2024-10-13 14:35:03.703220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.234 [2024-10-13 14:35:03.703225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.234 [2024-10-13 14:35:03.705660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.234 [2024-10-13 14:35:03.714985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.234 [2024-10-13 14:35:03.715494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.234 [2024-10-13 14:35:03.715507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.234 [2024-10-13 14:35:03.715513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.234 [2024-10-13 14:35:03.715664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.234 [2024-10-13 14:35:03.715815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.234 [2024-10-13 14:35:03.715820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.234 [2024-10-13 14:35:03.715825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.234 [2024-10-13 14:35:03.718259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.234 [2024-10-13 14:35:03.718767] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:39:00.234 [2024-10-13 14:35:03.718812] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:00.234 [2024-10-13 14:35:03.727622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.234 [2024-10-13 14:35:03.728297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.234 [2024-10-13 14:35:03.728328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.234 [2024-10-13 14:35:03.728337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.234 [2024-10-13 14:35:03.728504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.234 [2024-10-13 14:35:03.728657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.234 [2024-10-13 14:35:03.728664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.234 [2024-10-13 14:35:03.728669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.234 [2024-10-13 14:35:03.731126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.234 [2024-10-13 14:35:03.740327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.234 [2024-10-13 14:35:03.740817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.234 [2024-10-13 14:35:03.740832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.234 [2024-10-13 14:35:03.740838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.234 [2024-10-13 14:35:03.740989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.234 [2024-10-13 14:35:03.741145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.234 [2024-10-13 14:35:03.741151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.234 [2024-10-13 14:35:03.741156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.234 [2024-10-13 14:35:03.743589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
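The EAL parameter line above shows the nvmf application starting with core mask -c 0xE: bits 1, 2 and 3 are set, so the app runs on cores 1-3, which agrees with the "Total cores available: 3" notice and the three reactors started on cores 1, 2 and 3 further down. A short sketch of how such a hex mask decodes (plain bit arithmetic, not SPDK's actual option parser):

    #include <stdio.h>

    int main(void)
    {
        /* Decode a DPDK/SPDK-style hex core mask; 0xE is the mask used above. */
        unsigned long mask = 0xE;
        printf("core mask 0x%lX ->", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core))
                printf(" %d", core);   /* prints: 1 2 3 */
        }
        printf("\n");
        return 0;
    }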
00:39:00.234 [2024-10-13 14:35:03.752926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.234 [2024-10-13 14:35:03.753399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.234 [2024-10-13 14:35:03.753412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.234 [2024-10-13 14:35:03.753418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.234 [2024-10-13 14:35:03.753568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.234 [2024-10-13 14:35:03.753719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.234 [2024-10-13 14:35:03.753724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.234 [2024-10-13 14:35:03.753729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.234 [2024-10-13 14:35:03.756173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.234 [2024-10-13 14:35:03.765640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.234 [2024-10-13 14:35:03.766105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.234 [2024-10-13 14:35:03.766125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.234 [2024-10-13 14:35:03.766131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.234 [2024-10-13 14:35:03.766288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.234 [2024-10-13 14:35:03.766440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.234 [2024-10-13 14:35:03.766446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.234 [2024-10-13 14:35:03.766451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.234 [2024-10-13 14:35:03.768888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.234 [2024-10-13 14:35:03.778360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.234 [2024-10-13 14:35:03.778735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.234 [2024-10-13 14:35:03.778748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.234 [2024-10-13 14:35:03.778754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.778909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.779060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.779072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.779077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.781513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.791024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.791621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.791652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.791661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.791828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.791981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.791988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.791993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.794439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.803626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.804174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.804205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.804214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.804383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.804537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.804544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.804549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.806993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.816327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.816896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.816927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.816936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.817108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.817263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.817269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.817279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.819717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.829041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.829496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.829511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.829517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.829669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.829819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.829826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.829831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.832280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.841749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.842298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.842329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.842338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.842504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.842658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.842664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.842670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.845116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.854455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.855029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.855060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.855074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.855242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.855396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.855402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.855408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.856342] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:39:00.235 [2024-10-13 14:35:03.857934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.867134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.867679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.867709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.867718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.867885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.868038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.868045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.868051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.870496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.879823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.880322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.880337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.880343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.880494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.880645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.880651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.880656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.883094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.892418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.892981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.893013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.893022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.893194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.893347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.893354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.893359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.895795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.235 [2024-10-13 14:35:03.904578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:39:00.235 [2024-10-13 14:35:03.905125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.235 [2024-10-13 14:35:03.905623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.235 [2024-10-13 14:35:03.905638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.235 [2024-10-13 14:35:03.905647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.235 [2024-10-13 14:35:03.905798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.235 [2024-10-13 14:35:03.905950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.235 [2024-10-13 14:35:03.905956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.235 [2024-10-13 14:35:03.905961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.235 [2024-10-13 14:35:03.908401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.236 [2024-10-13 14:35:03.917731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.236 [2024-10-13 14:35:03.918363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.236 [2024-10-13 14:35:03.918400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.236 [2024-10-13 14:35:03.918409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.236 [2024-10-13 14:35:03.918579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.236 [2024-10-13 14:35:03.918734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.236 [2024-10-13 14:35:03.918740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.236 [2024-10-13 14:35:03.918745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.236 [2024-10-13 14:35:03.920401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:00.236 [2024-10-13 14:35:03.920422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:00.236 [2024-10-13 14:35:03.920429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:00.236 [2024-10-13 14:35:03.920435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:00.236 [2024-10-13 14:35:03.920440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:00.236 [2024-10-13 14:35:03.921195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.236 [2024-10-13 14:35:03.921704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:39:00.236 [2024-10-13 14:35:03.921854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:39:00.236 [2024-10-13 14:35:03.921856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:39:00.236 [2024-10-13 14:35:03.930399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.236 [2024-10-13 14:35:03.931005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.236 [2024-10-13 14:35:03.931038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.236 [2024-10-13 14:35:03.931048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.236 [2024-10-13 14:35:03.931227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.236 [2024-10-13 14:35:03.931383] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.236 [2024-10-13 14:35:03.931389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.236 [2024-10-13 14:35:03.931395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.236 [2024-10-13 14:35:03.933834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.498 [2024-10-13 14:35:03.943031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.498 [2024-10-13 14:35:03.943664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.498 [2024-10-13 14:35:03.943697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.498 [2024-10-13 14:35:03.943707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.498 [2024-10-13 14:35:03.943875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.498 [2024-10-13 14:35:03.944030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.498 [2024-10-13 14:35:03.944036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.498 [2024-10-13 14:35:03.944042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.498 [2024-10-13 14:35:03.946494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.498 [2024-10-13 14:35:03.955700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.498 [2024-10-13 14:35:03.956184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.498 [2024-10-13 14:35:03.956217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.498 [2024-10-13 14:35:03.956226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.498 [2024-10-13 14:35:03.956396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.498 [2024-10-13 14:35:03.956550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.498 [2024-10-13 14:35:03.956557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.498 [2024-10-13 14:35:03.956563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.498 [2024-10-13 14:35:03.959008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.498 [2024-10-13 14:35:03.968344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.498 [2024-10-13 14:35:03.968935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.498 [2024-10-13 14:35:03.968967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.498 [2024-10-13 14:35:03.968976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.498 [2024-10-13 14:35:03.969148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.498 [2024-10-13 14:35:03.969303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.498 [2024-10-13 14:35:03.969310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.498 [2024-10-13 14:35:03.969316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.498 [2024-10-13 14:35:03.971756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.498 [2024-10-13 14:35:03.980940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.498 [2024-10-13 14:35:03.981589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.498 [2024-10-13 14:35:03.981620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.498 [2024-10-13 14:35:03.981634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.498 [2024-10-13 14:35:03.981801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.498 [2024-10-13 14:35:03.981955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.498 [2024-10-13 14:35:03.981961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.498 [2024-10-13 14:35:03.981967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.499 [2024-10-13 14:35:03.984413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.499 [2024-10-13 14:35:03.993603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.499 [2024-10-13 14:35:03.994169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.499 [2024-10-13 14:35:03.994200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.499 [2024-10-13 14:35:03.994209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.499 [2024-10-13 14:35:03.994378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.499 [2024-10-13 14:35:03.994532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.499 [2024-10-13 14:35:03.994538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.499 [2024-10-13 14:35:03.994544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.499 [2024-10-13 14:35:03.996989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.499 [2024-10-13 14:35:04.006322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.499 [2024-10-13 14:35:04.006835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.499 [2024-10-13 14:35:04.006849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.499 [2024-10-13 14:35:04.006855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.499 [2024-10-13 14:35:04.007006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.499 [2024-10-13 14:35:04.007161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.499 [2024-10-13 14:35:04.007167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.499 [2024-10-13 14:35:04.007172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.499 [2024-10-13 14:35:04.009607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.499 [2024-10-13 14:35:04.018929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.499 [2024-10-13 14:35:04.019401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.499 [2024-10-13 14:35:04.019415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.499 [2024-10-13 14:35:04.019421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.499 [2024-10-13 14:35:04.019572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.499 [2024-10-13 14:35:04.019723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.499 [2024-10-13 14:35:04.019732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.499 [2024-10-13 14:35:04.019738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.499 [2024-10-13 14:35:04.022179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.499 [2024-10-13 14:35:04.031662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.499 [2024-10-13 14:35:04.032157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.499 [2024-10-13 14:35:04.032171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.499 [2024-10-13 14:35:04.032176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.499 [2024-10-13 14:35:04.032327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.499 [2024-10-13 14:35:04.032478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.499 [2024-10-13 14:35:04.032484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.499 [2024-10-13 14:35:04.032489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.499 [2024-10-13 14:35:04.034920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:00.499 4656.50 IOPS, 18.19 MiB/s [2024-10-13T12:35:04.206Z]
00:39:00.499 [2024-10-13 14:35:04.044290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:00.499 [2024-10-13 14:35:04.044794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:00.499 [2024-10-13 14:35:04.044807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420
00:39:00.499 [2024-10-13 14:35:04.044812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set
00:39:00.499 [2024-10-13 14:35:04.044963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor
00:39:00.499 [2024-10-13 14:35:04.045118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:00.499 [2024-10-13 14:35:04.045124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:00.499 [2024-10-13 14:35:04.045129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:00.499 [2024-10-13 14:35:04.047563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
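The periodic statistics line interleaved above reports 4656.50 IOPS at 18.19 MiB/s. Those two numbers are mutually consistent with a 4 KiB I/O size, which is an assumption here rather than something the excerpt states: 4656.50 x 4096 B = 19,073,024 B/s = 18.19 MiB/s. A quick arithmetic check:

    #include <stdio.h>

    int main(void)
    {
        /* Cross-check the reported stats line: IOPS x assumed I/O size
         * should reproduce the reported throughput. */
        double iops = 4656.50;
        double io_size = 4096.0;                         /* bytes, assumed */
        double mib_s = iops * io_size / (1024.0 * 1024.0);
        printf("%.2f IOPS x 4 KiB = %.2f MiB/s\n", iops, mib_s);  /* 18.19 */
        return 0;
    }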
00:39:00.499 [2024-10-13 14:35:04.056898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.499 [2024-10-13 14:35:04.057354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.499 [2024-10-13 14:35:04.057367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.499 [2024-10-13 14:35:04.057372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.499 [2024-10-13 14:35:04.057523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.499 [2024-10-13 14:35:04.057673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.499 [2024-10-13 14:35:04.057679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.499 [2024-10-13 14:35:04.057684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.499 [2024-10-13 14:35:04.060119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.499 [2024-10-13 14:35:04.069594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.499 [2024-10-13 14:35:04.070090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.499 [2024-10-13 14:35:04.070103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.499 [2024-10-13 14:35:04.070108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.499 [2024-10-13 14:35:04.070259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.499 [2024-10-13 14:35:04.070409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.499 [2024-10-13 14:35:04.070415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.499 [2024-10-13 14:35:04.070420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.499 [2024-10-13 14:35:04.072852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.499 [2024-10-13 14:35:04.082181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.499 [2024-10-13 14:35:04.082516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.499 [2024-10-13 14:35:04.082528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.499 [2024-10-13 14:35:04.082533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.499 [2024-10-13 14:35:04.082683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.499 [2024-10-13 14:35:04.082834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.499 [2024-10-13 14:35:04.082839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.499 [2024-10-13 14:35:04.082844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.499 [2024-10-13 14:35:04.085281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.499 [2024-10-13 14:35:04.094891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.499 [2024-10-13 14:35:04.095392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.499 [2024-10-13 14:35:04.095404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.499 [2024-10-13 14:35:04.095410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.499 [2024-10-13 14:35:04.095560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.499 [2024-10-13 14:35:04.095711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.499 [2024-10-13 14:35:04.095716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.499 [2024-10-13 14:35:04.095721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.499 [2024-10-13 14:35:04.098160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.499 [2024-10-13 14:35:04.107484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.499 [2024-10-13 14:35:04.107847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.499 [2024-10-13 14:35:04.107858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.499 [2024-10-13 14:35:04.107864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.499 [2024-10-13 14:35:04.108017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.499 [2024-10-13 14:35:04.108174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.499 [2024-10-13 14:35:04.108180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.499 [2024-10-13 14:35:04.108185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.499 [2024-10-13 14:35:04.110618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.499 [2024-10-13 14:35:04.120088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.499 [2024-10-13 14:35:04.120648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.499 [2024-10-13 14:35:04.120679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.499 [2024-10-13 14:35:04.120687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.499 [2024-10-13 14:35:04.120854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.499 [2024-10-13 14:35:04.121008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.121014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.121019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.123466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.500 [2024-10-13 14:35:04.132808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.500 [2024-10-13 14:35:04.133201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.500 [2024-10-13 14:35:04.133231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.500 [2024-10-13 14:35:04.133240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.500 [2024-10-13 14:35:04.133408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.500 [2024-10-13 14:35:04.133562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.133568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.133574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.136021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.500 [2024-10-13 14:35:04.145501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.500 [2024-10-13 14:35:04.145828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.500 [2024-10-13 14:35:04.145843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.500 [2024-10-13 14:35:04.145849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.500 [2024-10-13 14:35:04.146001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.500 [2024-10-13 14:35:04.146157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.146164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.146175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.148618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.500 [2024-10-13 14:35:04.158097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.500 [2024-10-13 14:35:04.158327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.500 [2024-10-13 14:35:04.158353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.500 [2024-10-13 14:35:04.158359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.500 [2024-10-13 14:35:04.158516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.500 [2024-10-13 14:35:04.158667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.158673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.158678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.161118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.500 [2024-10-13 14:35:04.170727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.500 [2024-10-13 14:35:04.171154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.500 [2024-10-13 14:35:04.171185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.500 [2024-10-13 14:35:04.171194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.500 [2024-10-13 14:35:04.171362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.500 [2024-10-13 14:35:04.171516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.171522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.171528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.173972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.500 [2024-10-13 14:35:04.183445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.500 [2024-10-13 14:35:04.184037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.500 [2024-10-13 14:35:04.184073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.500 [2024-10-13 14:35:04.184083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.500 [2024-10-13 14:35:04.184249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.500 [2024-10-13 14:35:04.184403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.184409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.184414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.186852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.500 [2024-10-13 14:35:04.196035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.500 [2024-10-13 14:35:04.196544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.500 [2024-10-13 14:35:04.196574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.500 [2024-10-13 14:35:04.196583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.500 [2024-10-13 14:35:04.196750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.500 [2024-10-13 14:35:04.196903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.500 [2024-10-13 14:35:04.196910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.500 [2024-10-13 14:35:04.196915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.500 [2024-10-13 14:35:04.199361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.763 [2024-10-13 14:35:04.208691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.209181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.209196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.209201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.209352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.209504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.209509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.209514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.211949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.763 [2024-10-13 14:35:04.221419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.221985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.222016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.222025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.222200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.222354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.222361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.222366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.224805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.763 [2024-10-13 14:35:04.234143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.234723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.234755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.234763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.234930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.235092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.235100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.235105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.237544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.763 [2024-10-13 14:35:04.246732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.247338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.247370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.247378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.247545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.247698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.247704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.247710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.250161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.763 [2024-10-13 14:35:04.259348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.259807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.259838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.259847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.260013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.260173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.260180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.260186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.262624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.763 [2024-10-13 14:35:04.271947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.272506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.272537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.272546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.272713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.272867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.272873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.272878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.275327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
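Two errno values carry this whole section: 111 on the connect, and 9 on the subsequent flush ("Bad file descriptor", because the refused socket has already been torn down by the time the completion path tries to use it). A quick way to decode them, assuming python3 is available on the test box:

    python3 - <<'PY'
    import errno, os
    for e in (111, 9):  # the two errno values in these retry cycles
        print(e, errno.errorcode[e], os.strerror(e))
    PY

which prints ECONNREFUSED and EBADF respectively.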
00:39:00.763 [2024-10-13 14:35:04.284650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.285244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.285275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.285284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.285450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.285604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.285610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.285615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.288057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.763 [2024-10-13 14:35:04.297237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.763 [2024-10-13 14:35:04.297832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.763 [2024-10-13 14:35:04.297863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.763 [2024-10-13 14:35:04.297872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.763 [2024-10-13 14:35:04.298039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.763 [2024-10-13 14:35:04.298199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.763 [2024-10-13 14:35:04.298206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.763 [2024-10-13 14:35:04.298211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.763 [2024-10-13 14:35:04.300648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.764 [2024-10-13 14:35:04.309820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.310328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.310343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.310349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.310500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.310651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.310656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.310661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.313096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.764 [2024-10-13 14:35:04.322414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.322955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.322989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.322998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.323171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.323324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.323330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.323336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.325770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
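How long the bdev layer keeps retrying, and at what pace, is tunable when the controller is attached rather than hard-coded; if the flags below exist in the tree under test (they do in recent rpc.py, but treat the spellings as an assumption to verify), an attach along these lines would bound the retry storm:

    # Sketch only: reconnect every 1 s, give up on the controller after 30 s.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 30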
00:39:00.764 [2024-10-13 14:35:04.335103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.335648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.335679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.335687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.335854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.336007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.336014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.336019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.338463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.764 [2024-10-13 14:35:04.347783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.348374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.348405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.348414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.348581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.348742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.348749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.348755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.351199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.764 [2024-10-13 14:35:04.360388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.360866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.360897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.360906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.361078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.361236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.361243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.361248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.363687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.764 [2024-10-13 14:35:04.373040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.373671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.373702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.373711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.373877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.374031] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.374037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.374043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.376489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
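Judging by the bracketed timestamps, the failed cycles repeat roughly every 12-13 ms (...04.208, ...04.221, ...04.234, and so on), so the dozens of near-identical blocks here cover well under a second of wall time. To tally the attempts from a saved copy of this console output (file name hypothetical):

    grep -c 'Resetting controller failed' bdevperf.log      # failed reset attempts
    grep -c 'Resetting controller successful' bdevperf.log  # appears once the listener is up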
00:39:00.764 [2024-10-13 14:35:04.385669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.386192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.386223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.386233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.386401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.386555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.386561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.386567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.389010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.764 [2024-10-13 14:35:04.398341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.398922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.398953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.398962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.399134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.399288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.399295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.399300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.401737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.764 [2024-10-13 14:35:04.411068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.411651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.411682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.411691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.411858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.412011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.412018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.412023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.414467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.764 [2024-10-13 14:35:04.423789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.424282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.424297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.424302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.424453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.424604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.424610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.424615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.764 [2024-10-13 14:35:04.427047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.764 [2024-10-13 14:35:04.436373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.764 [2024-10-13 14:35:04.436866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.764 [2024-10-13 14:35:04.436878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.764 [2024-10-13 14:35:04.436884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.764 [2024-10-13 14:35:04.437034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.764 [2024-10-13 14:35:04.437191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.764 [2024-10-13 14:35:04.437198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.764 [2024-10-13 14:35:04.437203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.765 [2024-10-13 14:35:04.439634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:00.765 [2024-10-13 14:35:04.449105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.765 [2024-10-13 14:35:04.449664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.765 [2024-10-13 14:35:04.449695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.765 [2024-10-13 14:35:04.449710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.765 [2024-10-13 14:35:04.449878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.765 [2024-10-13 14:35:04.450033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.765 [2024-10-13 14:35:04.450040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.765 [2024-10-13 14:35:04.450047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.765 [2024-10-13 14:35:04.452493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:00.765 [2024-10-13 14:35:04.461818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:00.765 [2024-10-13 14:35:04.462402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:00.765 [2024-10-13 14:35:04.462433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:00.765 [2024-10-13 14:35:04.462443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:00.765 [2024-10-13 14:35:04.462611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:00.765 [2024-10-13 14:35:04.462765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:00.765 [2024-10-13 14:35:04.462772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:00.765 [2024-10-13 14:35:04.462777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:00.765 [2024-10-13 14:35:04.465223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.027 [2024-10-13 14:35:04.474411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.474849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.027 [2024-10-13 14:35:04.474880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.027 [2024-10-13 14:35:04.474889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.027 [2024-10-13 14:35:04.475056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.027 [2024-10-13 14:35:04.475217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.027 [2024-10-13 14:35:04.475224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.027 [2024-10-13 14:35:04.475231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.027 [2024-10-13 14:35:04.477668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.027 [2024-10-13 14:35:04.487135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.487735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.027 [2024-10-13 14:35:04.487765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.027 [2024-10-13 14:35:04.487774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.027 [2024-10-13 14:35:04.487940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.027 [2024-10-13 14:35:04.488100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.027 [2024-10-13 14:35:04.488111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.027 [2024-10-13 14:35:04.488117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.027 [2024-10-13 14:35:04.490555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.027 [2024-10-13 14:35:04.499733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.500201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.027 [2024-10-13 14:35:04.500233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.027 [2024-10-13 14:35:04.500242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.027 [2024-10-13 14:35:04.500411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.027 [2024-10-13 14:35:04.500564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.027 [2024-10-13 14:35:04.500570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.027 [2024-10-13 14:35:04.500576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.027 [2024-10-13 14:35:04.503017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.027 [2024-10-13 14:35:04.512338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.512839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.027 [2024-10-13 14:35:04.512853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.027 [2024-10-13 14:35:04.512858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.027 [2024-10-13 14:35:04.513009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.027 [2024-10-13 14:35:04.513165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.027 [2024-10-13 14:35:04.513172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.027 [2024-10-13 14:35:04.513177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.027 [2024-10-13 14:35:04.515609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.027 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:01.027 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:39:01.027 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:01.027 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:01.027 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.027 [2024-10-13 14:35:04.524931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.525413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.027 [2024-10-13 14:35:04.525444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.027 [2024-10-13 14:35:04.525453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.027 [2024-10-13 14:35:04.525619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.027 [2024-10-13 14:35:04.525774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.027 [2024-10-13 14:35:04.525785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.027 [2024-10-13 14:35:04.525790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.027 [2024-10-13 14:35:04.528237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
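From here the shell trace starts interleaving with the reconnect noise: the "(( i == 0 ))" and "return 0" lines are the tail of a countdown-style wait in autotest_common.sh concluding that the target process came up, and timing_exit start_nvmf_tgt closes that phase of the test. In spirit the wait reduces to something like this (a sketch, not the literal helper):

    wait_for_target() {
        local pid=$1 i
        for ((i = 50; i > 0; i--)); do       # bounded number of checks
            kill -0 "$pid" 2>/dev/null && break
            sleep 0.1
        done
        ((i == 0)) && return 1               # gave up waiting
        return 0                             # target is alive; timing_exit follows
    }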
00:39:01.027 [2024-10-13 14:35:04.537573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.538076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.027 [2024-10-13 14:35:04.538092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.027 [2024-10-13 14:35:04.538097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.027 [2024-10-13 14:35:04.538249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.027 [2024-10-13 14:35:04.538400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.027 [2024-10-13 14:35:04.538406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.027 [2024-10-13 14:35:04.538411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.027 [2024-10-13 14:35:04.540844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.027 [2024-10-13 14:35:04.550175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.027 [2024-10-13 14:35:04.550641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.028 [2024-10-13 14:35:04.550654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.028 [2024-10-13 14:35:04.550659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.028 [2024-10-13 14:35:04.550810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.028 [2024-10-13 14:35:04.550960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.028 [2024-10-13 14:35:04.550968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.028 [2024-10-13 14:35:04.550973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.028 [2024-10-13 14:35:04.553441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.028 [2024-10-13 14:35:04.562560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.028 [2024-10-13 14:35:04.562764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.028 [2024-10-13 14:35:04.563247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.028 [2024-10-13 14:35:04.563260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.028 [2024-10-13 14:35:04.563265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.028 [2024-10-13 14:35:04.563416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.028 [2024-10-13 14:35:04.563570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.028 [2024-10-13 14:35:04.563576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.028 [2024-10-13 14:35:04.563581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.028 [2024-10-13 14:35:04.566014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
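The first RPC of the target-side setup has now gone through: nvmf_create_transport with -t tcp and -u 8192 initializes the TCP transport (hence the "*** TCP Transport Init ***" notice), -u being the I/O unit size in bytes; -o is passed through from the script's transport options as-is. Issued by hand it would look like this (sketch; rpc_cmd in these scripts wraps rpc.py against the default application socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192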
00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.028 [2024-10-13 14:35:04.575481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.028 [2024-10-13 14:35:04.575930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.028 [2024-10-13 14:35:04.575942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.028 [2024-10-13 14:35:04.575947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.028 [2024-10-13 14:35:04.576102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.028 [2024-10-13 14:35:04.576253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.028 [2024-10-13 14:35:04.576259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.028 [2024-10-13 14:35:04.576264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.028 [2024-10-13 14:35:04.578694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.028 [2024-10-13 14:35:04.588153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.028 [2024-10-13 14:35:04.588706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.028 [2024-10-13 14:35:04.588738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.028 [2024-10-13 14:35:04.588747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.028 [2024-10-13 14:35:04.588914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.028 [2024-10-13 14:35:04.589075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.028 [2024-10-13 14:35:04.589083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.028 [2024-10-13 14:35:04.589088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.028 [2024-10-13 14:35:04.591526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
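bdev_malloc_create 64 512 -b Malloc0 creates the RAM-backed block device that will be exported: the positional arguments are the total size in MiB and the block size in bytes, and -b names the bdev (the bare "Malloc0" on the next log line is the RPC's stdout echoing that name back). As a standalone call (sketch):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB, 512 B blocks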
00:39:01.028 Malloc0 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.028 [2024-10-13 14:35:04.600848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.028 [2024-10-13 14:35:04.601306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.028 [2024-10-13 14:35:04.601337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.028 [2024-10-13 14:35:04.601349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.028 [2024-10-13 14:35:04.601516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.028 [2024-10-13 14:35:04.601670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.028 [2024-10-13 14:35:04.601676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.028 [2024-10-13 14:35:04.601681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:01.028 [2024-10-13 14:35:04.604125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.028 [2024-10-13 14:35:04.613445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.028 [2024-10-13 14:35:04.613910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:01.028 [2024-10-13 14:35:04.613925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182edc0 with addr=10.0.0.2, port=4420 00:39:01.028 [2024-10-13 14:35:04.613930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182edc0 is same with the state(6) to be set 00:39:01.028 [2024-10-13 14:35:04.614087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182edc0 (9): Bad file descriptor 00:39:01.028 [2024-10-13 14:35:04.614239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:01.028 [2024-10-13 14:35:04.614244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:01.028 [2024-10-13 14:35:04.614249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
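With the transport and bdev in place, the subsystem is created and the namespace attached: -a allows any host NQN to connect and -s sets the serial number. The add_listener call that follows just below is the step that finally ends the reconnect storm. As standalone RPCs (sketch, mirroring the traced commands):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener notice ("*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***") appears, the very next reset attempt succeeds ("Resetting controller successful") and bdevperf's per-second IOPS samples start climbing toward the 15-second summary table.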
00:39:01.028 [2024-10-13 14:35:04.616680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:01.028 [2024-10-13 14:35:04.625715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.028 [2024-10-13 14:35:04.626141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.028 14:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1982848 00:39:01.028 [2024-10-13 14:35:04.708912] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:02.540 4619.43 IOPS, 18.04 MiB/s [2024-10-13T12:35:07.188Z] 5645.50 IOPS, 22.05 MiB/s [2024-10-13T12:35:08.132Z] 6463.33 IOPS, 25.25 MiB/s [2024-10-13T12:35:09.072Z] 7109.40 IOPS, 27.77 MiB/s [2024-10-13T12:35:10.458Z] 7636.82 IOPS, 29.83 MiB/s [2024-10-13T12:35:11.400Z] 8072.33 IOPS, 31.53 MiB/s [2024-10-13T12:35:12.340Z] 8445.69 IOPS, 32.99 MiB/s [2024-10-13T12:35:13.282Z] 8774.50 IOPS, 34.28 MiB/s 00:39:09.575 Latency(us) 00:39:09.575 [2024-10-13T12:35:13.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:09.575 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:09.575 Verification LBA range: start 0x0 length 0x4000 00:39:09.575 Nvme1n1 : 15.00 9043.01 35.32 13322.43 0.00 5703.77 557.67 15546.46 00:39:09.575 [2024-10-13T12:35:13.282Z] =================================================================================================================== 00:39:09.575 [2024-10-13T12:35:13.282Z] Total : 9043.01 35.32 13322.43 0.00 5703.77 557.67 15546.46 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.575 rmmod nvme_tcp 00:39:09.575 rmmod nvme_fabrics 00:39:09.575 rmmod nvme_keyring 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1983937 ']' 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1983937 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1983937 ']' 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1983937 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.575 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1983937 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1983937' 00:39:09.836 killing process with pid 1983937 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1983937 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1983937 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:09.836 14:35:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:12.379 00:39:12.379 real 0m28.434s 00:39:12.379 user 1m2.880s 00:39:12.379 sys 0m7.775s 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:12.379 ************************************ 
00:39:12.379 END TEST nvmf_bdevperf 00:39:12.379 ************************************ 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.379 ************************************ 00:39:12.379 START TEST nvmf_target_disconnect 00:39:12.379 ************************************ 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:12.379 * Looking for test storage... 00:39:12.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.379 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.379 --rc genhtml_branch_coverage=1 00:39:12.379 --rc genhtml_function_coverage=1 00:39:12.379 --rc genhtml_legend=1 00:39:12.379 --rc geninfo_all_blocks=1 00:39:12.380 --rc geninfo_unexecuted_blocks=1 00:39:12.380 00:39:12.380 ' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.380 --rc genhtml_branch_coverage=1 00:39:12.380 --rc genhtml_function_coverage=1 00:39:12.380 --rc genhtml_legend=1 00:39:12.380 --rc geninfo_all_blocks=1 00:39:12.380 --rc geninfo_unexecuted_blocks=1 00:39:12.380 00:39:12.380 ' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.380 --rc genhtml_branch_coverage=1 00:39:12.380 --rc genhtml_function_coverage=1 00:39:12.380 --rc genhtml_legend=1 00:39:12.380 --rc geninfo_all_blocks=1 00:39:12.380 --rc geninfo_unexecuted_blocks=1 00:39:12.380 00:39:12.380 ' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.380 --rc genhtml_branch_coverage=1 00:39:12.380 --rc genhtml_function_coverage=1 00:39:12.380 --rc genhtml_legend=1 00:39:12.380 --rc geninfo_all_blocks=1 00:39:12.380 --rc geninfo_unexecuted_blocks=1 00:39:12.380 00:39:12.380 ' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:12.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
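[editor's note] Note the PATH value echoed by paths/export.sh above: because the export file is sourced once per nested test script, the Go/golangci/protoc prefixes have been prepended several times over. This is harmless (lookup stops at the first hit), but if deduplication were ever wanted, a sketch like the following would do it (illustrative only, not part of the SPDK tree):

    # Drop duplicate PATH entries while preserving first-seen order.
    dedup_path() {
        local entry out=
        while IFS= read -rd: entry; do
            case ":$out:" in
                *":$entry:"*) ;;                      # already seen, skip
                *) out=${out:+$out:}$entry ;;         # first occurrence, keep
            esac
        done <<< "$PATH:"
        printf '%s\n' "$out"
    }
    PATH=$(dedup_path)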
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.380 14:35:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.522 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:20.523 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:20.523 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:20.523 Found net devices under 0000:31:00.0: cvl_0_0 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:20.523 Found net devices under 0000:31:00.1: cvl_0_1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
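[editor's note] gather_supported_nvmf_pci_devs, traced above, builds the e810/x722/mlx arrays keyed by PCI vendor:device IDs (0x8086:0x159b matches here, bound to the ice driver), then resolves each function's netdev via /sys/bus/pci/devices/<bdf>/net/ and keeps only interfaces that are up, which yields cvl_0_0 and cvl_0_1. A trimmed sketch of the same sysfs walk (hedged: only the two E810 IDs seen in this run, not the full ID table):

    # Map supported NVMe-oF test NICs (by PCI ID) to their net interfaces.
    for pci in /sys/bus/pci/devices/*; do
        read -r vendor < "$pci/vendor"; read -r device < "$pci/device"
        case "$vendor:$device" in
            0x8086:0x159b|0x8086:0x1592)              # Intel E810 family (ice)
                for net in "$pci"/net/*; do
                    [[ -e $net ]] || continue
                    echo "Found ${pci##*/} -> ${net##*/}"
                done ;;
        esac
    done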
00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:39:20.523 00:39:20.523 --- 10.0.0.2 ping statistics --- 00:39:20.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.523 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
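[editor's note] The TCP "phy" topology is assembled here entirely from the commands in the trace: one port (cvl_0_0) is moved into a fresh network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420 toward the initiator interface, and both directions are ping-verified (the 10.0.0.1 reply completes just below). Consolidated from the log:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns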
00:39:20.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:39:20.523 00:39:20.523 --- 10.0.0.1 ping statistics --- 00:39:20.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.523 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:20.523 ************************************ 00:39:20.523 START TEST nvmf_target_disconnect_tc1 00:39:20.523 ************************************ 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:20.523 14:35:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:39:20.523 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:20.523 [2024-10-13 14:35:23.674205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:20.524 [2024-10-13 14:35:23.674291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae2290 with addr=10.0.0.2, port=4420 00:39:20.524 [2024-10-13 14:35:23.674323] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:20.524 [2024-10-13 14:35:23.674335] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:20.524 [2024-10-13 14:35:23.674344] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:39:20.524 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:39:20.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:39:20.524 Initializing NVMe Controllers 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:20.524 00:39:20.524 real 0m0.235s 00:39:20.524 user 0m0.055s 00:39:20.524 sys 0m0.079s 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:39:20.524 ************************************ 00:39:20.524 END TEST nvmf_target_disconnect_tc1 00:39:20.524 ************************************ 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
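[editor's note] tc1, which just finished above, is a pure negative test: no listener exists yet, so the reconnect example's probe over 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' must fail, and the NOT/valid_exec_arg wrapper converts that expected failure (connect() errno 111, ECONNREFUSED, surfacing as the spdk_nvme_probe() error) into a pass: es=1, no allow-list match, so (( !es == 0 )) evaluates true and NOT returns success. Stripped of the harness plumbing, the check reduces to (a sketch, not the harness code):

    # tc1 in miniature: the probe MUST fail while nothing listens on 4420.
    if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
         -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "tc1 PASS: probe was refused as expected"
    fi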
xtrace_disable 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:20.524 ************************************ 00:39:20.524 START TEST nvmf_target_disconnect_tc2 00:39:20.524 ************************************ 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1990066 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1990066 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1990066 ']' 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.524 14:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:20.524 [2024-10-13 14:35:23.842843] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:39:20.524 [2024-10-13 14:35:23.842910] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.524 [2024-10-13 14:35:23.986017] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:20.524 [2024-10-13 14:35:24.037590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:20.524 [2024-10-13 14:35:24.065867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:20.524 [2024-10-13 14:35:24.065910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.524 [2024-10-13 14:35:24.065919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.524 [2024-10-13 14:35:24.065926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.524 [2024-10-13 14:35:24.065932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.524 [2024-10-13 14:35:24.068308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:20.524 [2024-10-13 14:35:24.068531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:20.524 [2024-10-13 14:35:24.068694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:20.524 [2024-10-13 14:35:24.068696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.098 Malloc0 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.098 [2024-10-13 14:35:24.748689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
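[editor's note] With networking up, tc2 starts the target inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, PID 1990066) and provisions it over the RPC socket; the namespace and listener calls continue in the trace below. A consolidated sketch of the equivalent rpc.py invocations (the harness uses its rpc_cmd wrapper plus waitforlisten socket polling instead, omitted here):

    ns="ip netns exec cvl_0_0_ns_spdk"
    $ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420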
00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.098 [2024-10-13 14:35:24.788950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.098 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:21.360 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.360 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1990336 00:39:21.360 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:39:21.360 14:35:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:23.286 14:35:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1990066 00:39:23.286 14:35:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 
00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 [2024-10-13 14:35:26.828642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 
starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Write completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 Read completed with error (sct=0, sc=8) 00:39:23.286 starting I/O failed 00:39:23.286 [2024-10-13 14:35:26.829020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:23.286 [2024-10-13 14:35:26.829548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.829615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.829985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.830002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.830503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.830556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.830895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.830910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 
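[editor's note] The failure storm that begins after 'kill -9 1990066' is the point of the test: the host sees a CQ transport error (-6, no such device or address) on qpairs 3 and 4, and every outstanding I/O is completed in software with sct=0, sc=8, which per the NVMe base spec's status tables is Generic Command Status / Command Aborted due to SQ Deletion. The reconnect logic then loops on connect() errno 111 (ECONNREFUSED), since nothing listens on 10.0.0.2:4420 any more. A small decoder covering just the pairs seen in this log (illustrative, not exhaustive):

    # Decode the (sct, sc) pairs seen in this log (NVMe generic status only).
    decode_status() {
        local sct=$1 sc=$2
        (( sct == 0 )) || { echo "sct=$sct: non-generic status code type"; return; }
        case $sc in
            0) echo "Successful Completion" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "generic status, sc=$sc" ;;
        esac
    }
    decode_status 0 8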
00:39:23.286 [2024-10-13 14:35:26.831360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.831413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.831697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.831710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.832025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.832037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.832362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.832417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.286 [2024-10-13 14:35:26.832635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.286 [2024-10-13 14:35:26.832649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.286 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.832963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.832975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.833342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.833354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.833708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.833720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.834014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.834025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.834368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.834380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 
00:39:23.287 [2024-10-13 14:35:26.834696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.834708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.835069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.835082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.835348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.835359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.835714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.835725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.836021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.836032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.836188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.836200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.836538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.836550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.836898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.836909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.837240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.837252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.837609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.837620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 
00:39:23.287 [2024-10-13 14:35:26.837840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.837851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.838108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.838120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.838418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.838430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.838792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.838803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.839157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.839167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.839489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.839500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.839820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.839830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.840096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.840108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.840522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.840532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.840776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.840787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 
00:39:23.287 [2024-10-13 14:35:26.841078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.841090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.841335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.841346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.841642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.841652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.841958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.841968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.842243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.842253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.842631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.842645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.842946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.842956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.843241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.843252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.843569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.843579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 00:39:23.287 [2024-10-13 14:35:26.843884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.287 [2024-10-13 14:35:26.843894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.287 qpair failed and we were unable to recover it. 
00:39:23.292 [2024-10-13 14:35:26.911707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.911736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.912096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.912127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.912526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.912556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.912914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.912942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.913297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.913327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.913693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.913722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.914093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.914124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.914483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.914513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.914882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.292 [2024-10-13 14:35:26.914913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.292 qpair failed and we were unable to recover it. 00:39:23.292 [2024-10-13 14:35:26.915110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.915144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 
00:39:23.293 [2024-10-13 14:35:26.915398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.915428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.915812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.915841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.916103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.916136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.916506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.916536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.916899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.916928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.917253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.917283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.917636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.917666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.917922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.917951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.918211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.918244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.918516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.918549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 
00:39:23.293 [2024-10-13 14:35:26.918916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.918946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.919323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.919354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.919717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.919745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.920121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.920152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.920533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.920562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.920925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.920955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.921349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.921380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.921742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.921771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.922208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.922239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.922603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.922633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 
00:39:23.293 [2024-10-13 14:35:26.923104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.923136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.923520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.923550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.923912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.923948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.924283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.924314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.924679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.924708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.925079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.925109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.925465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.925495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.925727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.925756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.926115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.926145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.926531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.926560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 
00:39:23.293 [2024-10-13 14:35:26.926925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.926953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.927326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.927356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.927702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.927731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.928096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.928127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.928536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.928565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.928896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.928926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.929324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.929355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.929705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.929735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.293 [2024-10-13 14:35:26.930179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.293 [2024-10-13 14:35:26.930210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.293 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.930619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.930648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 
00:39:23.294 [2024-10-13 14:35:26.930987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.931016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.931420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.931450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.931782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.931812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.932056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.932096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.932532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.932561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.932775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.932806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.933061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.933101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.933457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.933486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.933870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.933899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.934257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.934286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 
00:39:23.294 [2024-10-13 14:35:26.934688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.934717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.935113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.935143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.935484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.935513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.935890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.935918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.936298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.936330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.936705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.936735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.936969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.937002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.937337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.937368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.937737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.937767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.938133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.938163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 
00:39:23.294 [2024-10-13 14:35:26.938532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.938562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.938910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.938938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.939294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.939331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.939701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.939730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.940102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.940131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.940495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.940522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.940872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.940902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.941274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.941305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.941649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.941678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.941926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.941955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 
00:39:23.294 [2024-10-13 14:35:26.942325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.942354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.942722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.942751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.294 [2024-10-13 14:35:26.943118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.294 [2024-10-13 14:35:26.943148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.294 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.943505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.943534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.943790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.943818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.944184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.944215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.944580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.944610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.944974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.945004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.945360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.945390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.945759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.945789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 
00:39:23.295 [2024-10-13 14:35:26.946173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.946203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.946582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.946612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.946981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.947009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.947375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.947405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.947768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.947796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.948151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.948180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.948547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.948575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.948921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.948948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.949199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.949229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.949603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.949632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 
00:39:23.295 [2024-10-13 14:35:26.949996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.950024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.950504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.950534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.950898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.950928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.951297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.951327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.951776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.951804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.952242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.952272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.952519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.952548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.952933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.952962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.953340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.953369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.953729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.953759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 
00:39:23.295 [2024-10-13 14:35:26.954121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.954152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.954540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.954569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.954912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.954948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.955300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.955331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.955551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.955582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.955943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.955973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.956314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.956347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.956685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.956714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.957005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.957034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.957407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.957437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 
00:39:23.295 [2024-10-13 14:35:26.957835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.957864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.958114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.295 [2024-10-13 14:35:26.958147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.295 qpair failed and we were unable to recover it. 00:39:23.295 [2024-10-13 14:35:26.958503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.958532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.958921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.958950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.959314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.959346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.959735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.959764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.960106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.960137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.960505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.960534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.960894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.960923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.961300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.961329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 
00:39:23.296 [2024-10-13 14:35:26.961582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.961611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.961965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.961994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.962385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.962415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.962765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.962795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.963142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.963172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.963524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.963554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.963921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.963950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.964290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.964320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.964677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.964705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 00:39:23.296 [2024-10-13 14:35:26.965073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.296 [2024-10-13 14:35:26.965105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.296 qpair failed and we were unable to recover it. 
00:39:23.296 [2024-10-13 14:35:26.965463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:23.296 [2024-10-13 14:35:26.965493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:23.296 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error repeats, with only the timestamps advancing, for every retry from 14:35:26.965463 through 14:35:27.044997 (log clock 00:39:23.296 to 00:39:23.574): each attempt to addr=10.0.0.2, port=4420 on tqpair=0x7f5534000b90 fails with errno = 111 and the qpair is not recovered ...]
00:39:23.574 [2024-10-13 14:35:27.045235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.045266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.045640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.045669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.046024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.046053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.046423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.046452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.046840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.046869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.047153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.047183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.047560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.047588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.047793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.047824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.048193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.048224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.048594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.048623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 
00:39:23.574 [2024-10-13 14:35:27.048985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.049013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.049398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.049429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.049791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.049820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.050177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.050207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.050566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.050596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.050957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.050986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.051316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.051348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.574 qpair failed and we were unable to recover it. 00:39:23.574 [2024-10-13 14:35:27.051701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.574 [2024-10-13 14:35:27.051729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.052095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.052125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.052498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.052527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 
00:39:23.575 [2024-10-13 14:35:27.052895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.052923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.053304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.053333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.053720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.053748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.054081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.054112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.054473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.054502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.054823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.054853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.055205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.055235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.055612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.055641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.056005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.056032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.056451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.056487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 
00:39:23.575 [2024-10-13 14:35:27.056860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.056890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.057253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.057285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.057656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.057686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.058048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.058086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.058440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.058469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.058810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.058840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.059216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.059246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.059600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.059629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.059999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.060027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.060406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.060436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 
00:39:23.575 [2024-10-13 14:35:27.060750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.060779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.061132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.061162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.061519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.061548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.061910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.061939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.062312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.062342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.062711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.062740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.063106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.063135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.063495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.063523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.063889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.063917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.064178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.064208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 
00:39:23.575 [2024-10-13 14:35:27.064560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.064588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.064829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.064860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.065236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.065267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.065635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.065664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.066026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.066056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.066400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.066430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.066683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.066711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.575 [2024-10-13 14:35:27.067070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.575 [2024-10-13 14:35:27.067101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.575 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.067440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.067469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.067836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.067864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 
00:39:23.576 [2024-10-13 14:35:27.068079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.068112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.068477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.068506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.068853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.068881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.069236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.069266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.069629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.069659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.069917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.069946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.070193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.070223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.070355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.070385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.070758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.070788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.071136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.071172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 
00:39:23.576 [2024-10-13 14:35:27.071548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.071577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.071979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.072008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.072451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.072481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.072836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.072865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.073237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.073267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.073622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.073651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.074000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.074029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.074431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.074461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.074868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.074898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.075270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.075300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 
00:39:23.576 [2024-10-13 14:35:27.075670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.075699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.076073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.076102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.076389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.076418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.076778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.076806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.077169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.077199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.077565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.077594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.077933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.077962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.078332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.078361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.078721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.078749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.079124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.079156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 
00:39:23.576 [2024-10-13 14:35:27.079560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.079589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.080002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.080031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.080453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.080485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.080849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.080880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.081254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.081286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.081690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.081719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.082111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.082143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.082492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.576 [2024-10-13 14:35:27.082522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.576 qpair failed and we were unable to recover it. 00:39:23.576 [2024-10-13 14:35:27.082883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.082914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.083289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.083321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 
00:39:23.577 [2024-10-13 14:35:27.083658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.083690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.084041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.084081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.084487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.084518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.084872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.084903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.085293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.085324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.085693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.085722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.086027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.086054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.086450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.086480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.086840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.086870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.087237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.087272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 
00:39:23.577 [2024-10-13 14:35:27.087609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.087638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.087885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.087915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.088334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.088365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.088722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.088752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.089119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.089150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.089516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.089547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.089916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.089946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.090290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.090322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.090556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.090585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.090928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.090958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 
00:39:23.577 [2024-10-13 14:35:27.091321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.091352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.091708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.091739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.092099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.092129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.092516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.092547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.092922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.092953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.093324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.093354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.093693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.093731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.094085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.094117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.094349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.094379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 00:39:23.577 [2024-10-13 14:35:27.094728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.577 [2024-10-13 14:35:27.094758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.577 qpair failed and we were unable to recover it. 
00:39:23.577 [2024-10-13 14:35:27.095126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:23.577 [2024-10-13 14:35:27.095157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:23.577 qpair failed and we were unable to recover it.
00:39:23.577 [... the same three-line error repeats for every reconnect attempt, roughly 210 times between 14:35:27.095126 and 14:35:27.174703 (~80 ms), all with errno = 111 against tqpair=0x7f5534000b90, addr=10.0.0.2, port=4420 ...]
00:39:23.583 [2024-10-13 14:35:27.174703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:23.583 [2024-10-13 14:35:27.174731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:23.583 qpair failed and we were unable to recover it.
00:39:23.583 [2024-10-13 14:35:27.175089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.175119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.175340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.175371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.175725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.175754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.176098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.176128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.176252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.176283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.176621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.176650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.177084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.177114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.177474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.177503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.177776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.177804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.178046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.178097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 
00:39:23.583 [2024-10-13 14:35:27.178453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.178482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.178848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.178876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.179238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.179267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.179627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.179656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.180027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.180056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.180428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.180457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.180827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.180856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.181322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.181352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.181692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.181720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.182090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.182119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 
00:39:23.583 [2024-10-13 14:35:27.182478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.182506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.182951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.182979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.183318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.183348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.183710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.183739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.184100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.184130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.184500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.184528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.184917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.184945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.185334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.185364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.185729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.185758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.583 [2024-10-13 14:35:27.186162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.186192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 
00:39:23.583 [2024-10-13 14:35:27.186557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.583 [2024-10-13 14:35:27.186585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.583 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.186944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.186973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.187349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.187379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.187794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.187822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.188243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.188272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.188628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.188657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.188889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.188917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.189239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.189269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.189645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.189674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.189910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.189940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 
00:39:23.584 [2024-10-13 14:35:27.190318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.190349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.190567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.190598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.190953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.190983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.191341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.191372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.191731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.191760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.192127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.192156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.192517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.192546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.192880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.192908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.193308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.193337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.193707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.193742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 
00:39:23.584 [2024-10-13 14:35:27.193973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.194005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.194440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.194470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.194817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.194847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.195184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.195214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.195585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.195614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.195975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.196003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.196366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.196396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.196756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.196786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.197057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.197097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.197490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.197518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 
00:39:23.584 [2024-10-13 14:35:27.197886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.197916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.198247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.198278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.198637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.198667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.199045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.199084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.199431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.199461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.199821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.199850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.200213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.200242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.200609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.200637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.201005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.201034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.201387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.201416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 
00:39:23.584 [2024-10-13 14:35:27.201777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.584 [2024-10-13 14:35:27.201806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.584 qpair failed and we were unable to recover it. 00:39:23.584 [2024-10-13 14:35:27.202148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.202178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.202518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.202546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.202903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.202933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.203310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.203340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.203603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.203632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.204000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.204030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.204393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.204423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.204771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.204800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.205184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.205214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 
00:39:23.585 [2024-10-13 14:35:27.205563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.205594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.205968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.205997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.206247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.206280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.206640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.206669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.207018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.207047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.207402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.207431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.207777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.207805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.208169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.208200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.208565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.208594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.208966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.209001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 
00:39:23.585 [2024-10-13 14:35:27.209354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.209384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.209666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.209696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.209946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.209978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.210330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.210361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.210616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.210646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.211003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.211032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.211398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.211429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.211794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.211824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.212177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.212208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.212518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.212549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 
00:39:23.585 [2024-10-13 14:35:27.212901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.212930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.213188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.213218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.213589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.213618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.213987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.214017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.214414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.214445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.214856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.214885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.215224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.215256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.215530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.215561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.215905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.215936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.216205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.216236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 
00:39:23.585 [2024-10-13 14:35:27.216579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.585 [2024-10-13 14:35:27.216609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.585 qpair failed and we were unable to recover it. 00:39:23.585 [2024-10-13 14:35:27.216973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.217001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.217324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.217355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.217719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.217748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.218113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.218144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.218547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.218577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.218955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.218986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.219350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.219379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.219722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.219751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.219980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.220011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 
00:39:23.586 [2024-10-13 14:35:27.220474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.220503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.220714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.220745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.221124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.221154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.221507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.221535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.221899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.221929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.222291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.222322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.222678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.222707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.223058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.223102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.223457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.223485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 00:39:23.586 [2024-10-13 14:35:27.223848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.223884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it. 
00:39:23.586 [2024-10-13 14:35:27.224255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.586 [2024-10-13 14:35:27.224287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.586 qpair failed and we were unable to recover it.
00:39:23.586 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeated for roughly 200 further connect attempts between 14:35:27.224 and 14:35:27.303, every attempt failing with errno = 111 against tqpair=0x7f5534000b90 at 10.0.0.2, port 4420 ...]
00:39:23.866 [2024-10-13 14:35:27.302968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.302998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it.
00:39:23.866 [2024-10-13 14:35:27.303359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.303389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.303759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.303788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.304157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.304186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.304524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.304553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.304911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.304941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.305202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.305235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.305578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.305608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.305956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.305984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.306348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.306378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.306625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.306653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 
00:39:23.866 [2024-10-13 14:35:27.307023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.307052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.307458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.307488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.307838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.307867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.308251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.308280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.308636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.308665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.309026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.309056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.309402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.309438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.309776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.309806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.310156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.310187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.310561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.310590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 
00:39:23.866 [2024-10-13 14:35:27.310954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.310983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.311268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.311297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.311619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.311650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.311982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.312011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.312357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.312388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.312642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.312671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.313021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.313050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.313394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.313424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.313790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.313819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.314188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.314220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 
00:39:23.866 [2024-10-13 14:35:27.314593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.314622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.314986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.315015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.315385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.315415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.866 qpair failed and we were unable to recover it. 00:39:23.866 [2024-10-13 14:35:27.315770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.866 [2024-10-13 14:35:27.315800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.316134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.316164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.316520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.316550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.316900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.316931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.317270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.317300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.317562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.317590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.317933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.317963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 
00:39:23.867 [2024-10-13 14:35:27.318341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.318370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.318733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.318761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.319105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.319137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.319423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.319452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.319824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.319853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.320214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.320245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.320491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.320519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.320857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.320887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.321244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.321275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.321643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.321672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 
00:39:23.867 [2024-10-13 14:35:27.322048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.322091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.322453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.322484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.322846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.322876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.323126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.323157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.323521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.323551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.323907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.323935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.324294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.324332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.324759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.324789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.325018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.325047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.325406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.325435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 
00:39:23.867 [2024-10-13 14:35:27.325797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.325831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.326190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.326220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.326567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.326596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.326961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.326991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.327413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.327444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.327666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.327694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.328050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.328097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.328429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.328458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.867 [2024-10-13 14:35:27.328829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.867 [2024-10-13 14:35:27.328857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.867 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.329243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.329272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 
00:39:23.868 [2024-10-13 14:35:27.329682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.329713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.330124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.330156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.330505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.330534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.330780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.330813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.331052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.331100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.331445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.331474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.331710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.331743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.332126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.332158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.332547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.332577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.332937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.332968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 
00:39:23.868 [2024-10-13 14:35:27.333331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.333363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.333723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.333755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.334114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.334145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.334520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.334549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.334927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.334958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.335330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.335361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.335700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.335730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.336101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.336133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.336514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.336544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.336902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.336931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 
00:39:23.868 [2024-10-13 14:35:27.337295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.337325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.337674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.337714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.338085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.338116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.338494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.338524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.338900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.338929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.339271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.339303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.339649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.339688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.340031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.340061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.340435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.340465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 00:39:23.868 [2024-10-13 14:35:27.340700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.868 [2024-10-13 14:35:27.340733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.868 qpair failed and we were unable to recover it. 
00:39:23.868 [2024-10-13 14:35:27.341096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.341129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.341517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.341546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.341787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.341818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.342079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.342114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.342380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.342411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.342797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.342826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.343189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.343220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.343600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.343631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.343962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.343990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.344356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.344387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 
00:39:23.869 [2024-10-13 14:35:27.344736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.344767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.345113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.345144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.345512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.345542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.345878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.345908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.346307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.346339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.346676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.346707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.346944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.346975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.347333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.347363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.347730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.347761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.348133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.348164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 
00:39:23.869 [2024-10-13 14:35:27.348513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.348542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.348908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.348939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.349266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.349299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.349671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.349701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.350045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.350085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.350437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.350467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.350823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.350853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.351192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.351223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.351522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.351551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 00:39:23.869 [2024-10-13 14:35:27.351903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.351932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it. 
00:39:23.869 [2024-10-13 14:35:27.352276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.869 [2024-10-13 14:35:27.352306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.869 qpair failed and we were unable to recover it.
[... the same three-part error sequence (posix_sock_create connect() errno = 111 -> nvme_tcp_qpair_connect_sock failure for tqpair=0x7f5534000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats ~210 times in total, with only the timestamps advancing from 14:35:27.352 to 14:35:27.434 ...]
00:39:23.875 [2024-10-13 14:35:27.434293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.434323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it.
00:39:23.875 [2024-10-13 14:35:27.434651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.434680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.435088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.435118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.435483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.435512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.435892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.435922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.436288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.436317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.436659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.436687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.437049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.437090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.437449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.437478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.437849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.437879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.438250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.438281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 
00:39:23.875 [2024-10-13 14:35:27.438521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.438549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.438900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.438928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.439303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.439333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.439693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.439722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.440100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.440131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.440497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.440526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.440900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.440929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.441176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.441206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.875 [2024-10-13 14:35:27.441569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.875 [2024-10-13 14:35:27.441598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.875 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.441967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.441997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 
00:39:23.876 [2024-10-13 14:35:27.442337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.442368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.442729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.442759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.443129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.443159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.443529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.443562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.443815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.443843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.444176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.444206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.444569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.444598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.444958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.444988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.445352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.445383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.445730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.445758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 
00:39:23.876 [2024-10-13 14:35:27.446128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.446158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.446533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.446562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.446931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.446960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.447336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.447367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.447738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.447767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.448130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.448160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.448513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.448542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.448897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.448926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.449295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.449326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.449653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.449683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 
00:39:23.876 [2024-10-13 14:35:27.450044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.450082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.450481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.450510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.450882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.450910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.451246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.451275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.451645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.451674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.452036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.452090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.452453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.452482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.452757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.452786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.453133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.453163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.453518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.453546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 
00:39:23.876 [2024-10-13 14:35:27.453951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.453984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.454328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.454358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.454713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.454742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.455118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.455150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.455550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.455578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.455933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.455962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.876 [2024-10-13 14:35:27.456337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.876 [2024-10-13 14:35:27.456367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.876 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.456753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.456782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.457120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.457149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.457499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.457528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 
00:39:23.877 [2024-10-13 14:35:27.457839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.457868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.458251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.458281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.458648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.458677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.459028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.459073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.459402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.459430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.459730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.459757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.460124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.460155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.460525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.460554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.460950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.460980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.461353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.461383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 
00:39:23.877 [2024-10-13 14:35:27.461642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.461674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.462038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.462079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.462501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.462530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.462861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.462891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.463305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.463337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.463675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.463705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.464084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.464114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.464446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.464475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.464804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.464832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.465195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.465227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 
00:39:23.877 [2024-10-13 14:35:27.465668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.465699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.466060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.466101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.466336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.466368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.466749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.466779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.467187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.467217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.467593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.467624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.467973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.468003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.468441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.468471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.468829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.468857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.877 [2024-10-13 14:35:27.469095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.469127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 
00:39:23.877 [2024-10-13 14:35:27.469518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.877 [2024-10-13 14:35:27.469547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.877 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.469959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.469988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.470239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.470270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.470622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.470653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.471028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.471059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.471439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.471468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.471828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.471858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.472119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.472150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.472523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.472551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.472886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.472917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 
00:39:23.878 [2024-10-13 14:35:27.473286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.473318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.473661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.473690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.474061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.474102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.474380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.474414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.474753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.474783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.475137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.475168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.475499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.475529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.475877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.475906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.476213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.476242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.476492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.476525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 
00:39:23.878 [2024-10-13 14:35:27.476863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.476893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.477241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.477272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.477634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.477664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.478031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.478059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.478309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.478339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.478719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.478749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.479120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.479148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.479516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.479546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.479910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.479940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.480282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.480313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 
00:39:23.878 [2024-10-13 14:35:27.480682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.480712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.481082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.481112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.481513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.481542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.481900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.481929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.482295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.482325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.482688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.482716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.483102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.483133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.878 qpair failed and we were unable to recover it. 00:39:23.878 [2024-10-13 14:35:27.483478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.878 [2024-10-13 14:35:27.483507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.879 qpair failed and we were unable to recover it. 00:39:23.879 [2024-10-13 14:35:27.483870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.879 [2024-10-13 14:35:27.483899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.879 qpair failed and we were unable to recover it. 00:39:23.879 [2024-10-13 14:35:27.484273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:23.879 [2024-10-13 14:35:27.484303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:23.879 qpair failed and we were unable to recover it. 
00:39:23.879 [2024-10-13 14:35:27.484658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:23.879 [2024-10-13 14:35:27.484687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:23.879 qpair failed and we were unable to recover it.
00:39:24.160 [2024-10-13 14:35:27.484658 .. 14:35:27.564217] the same three-line failure repeats for every connect attempt in this window: posix_sock_create reports connect() errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f5534000b90 (addr=10.0.0.2, port=4420), and each qpair fails without recovery.
00:39:24.160 [2024-10-13 14:35:27.564580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.564610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.564984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.565013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.565384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.565413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.565793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.565827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.566111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.566140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.566489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.566517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.566824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.566860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.567190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.567220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.567586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.567615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.567980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.568008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 
00:39:24.160 [2024-10-13 14:35:27.568379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.568410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.568747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.568776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.569152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.569182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.569546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.569576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.569911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.569940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.570292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.570322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.570575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.570604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.570966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.570995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.571245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.571275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.571706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.571736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 
00:39:24.160 [2024-10-13 14:35:27.572100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.572131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.572484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.572513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.572878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.572906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.573131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.573162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.573538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.573568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.573939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.573969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.574368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.574399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.574761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.574789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.575140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.575169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 00:39:24.160 [2024-10-13 14:35:27.575467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.160 [2024-10-13 14:35:27.575496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.160 qpair failed and we were unable to recover it. 
00:39:24.160 [2024-10-13 14:35:27.575853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.575883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.576263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.576294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.576663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.576692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.577053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.577092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.577495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.577524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.577883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.577912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.578142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.578174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.578536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.578566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.578916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.578946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.579295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.579325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 
00:39:24.161 [2024-10-13 14:35:27.579446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.579477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.579751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.579780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.580140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.580172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.580515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.580552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.580928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.580958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.581309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.581343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.581689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.581717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.582059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.582111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.582505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.582534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.582894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.582922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 
00:39:24.161 [2024-10-13 14:35:27.583301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.583331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.583696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.583726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.584130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.584160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.584392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.584424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.584816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.584845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.585177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.585209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.585565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.585595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.585967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.585998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.586366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.586395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.586803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.586832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 
00:39:24.161 [2024-10-13 14:35:27.587184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.587215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.587572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.587601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.587955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.587986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.588360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.588393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.588742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.588771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.589015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.589043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.589420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.589450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.589846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.589878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.590235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.590266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 00:39:24.161 [2024-10-13 14:35:27.590628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.161 [2024-10-13 14:35:27.590657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.161 qpair failed and we were unable to recover it. 
00:39:24.161 [2024-10-13 14:35:27.590884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.590916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.591301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.591333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.591697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.591726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.592075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.592107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.592465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.592496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.592860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.592890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.593091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.593125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.593395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.593425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.593769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.593797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.594178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.594208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 
00:39:24.162 [2024-10-13 14:35:27.594567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.594596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.594945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.594976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.595317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.595349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.595716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.595752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.596105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.596135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.596526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.596556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.596914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.596943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.597326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.597357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.597707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.597737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.598100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.598131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 
00:39:24.162 [2024-10-13 14:35:27.598484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.598514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.598880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.598909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.599284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.599316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.599665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.599695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.600076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.600107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.600460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.600490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.600748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.600779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.601099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.601129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.601481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.601511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.601876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.601907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 
00:39:24.162 [2024-10-13 14:35:27.602273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.602302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.602549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.602581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.602950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.602981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.603341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.603373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.603636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.603665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.604014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.604053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.604439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.604471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.604829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.604858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.605232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.605262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 00:39:24.162 [2024-10-13 14:35:27.605630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.605661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.162 qpair failed and we were unable to recover it. 
00:39:24.162 [2024-10-13 14:35:27.606048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.162 [2024-10-13 14:35:27.606091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.606440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.606471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.606840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.606870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.607235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.607267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.607520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.607551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.607902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.607932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.608303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.608334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.608696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.608727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.609089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.609119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.609467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.609497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 
00:39:24.163 [2024-10-13 14:35:27.609843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.609874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.610247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.610281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.610534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.610564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.610925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.610961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.611218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.611251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.611582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.611613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.611996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.612025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.612375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.612407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.612775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.612805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.613175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.613207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 
00:39:24.163 [2024-10-13 14:35:27.613503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.613533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.613894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.613924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.614291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.614320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.614689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.614718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.615085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.615116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.615395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.615425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.615780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.615812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.616180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.616213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.616555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.616585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.616809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.616843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 
00:39:24.163 [2024-10-13 14:35:27.617272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.617302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.617659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.617688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.618087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.618119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.618542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.618572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.618942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.618973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.619330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.619360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.619726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.619755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.620120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.620152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.620511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.620543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 00:39:24.163 [2024-10-13 14:35:27.620976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.163 [2024-10-13 14:35:27.621007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.163 qpair failed and we were unable to recover it. 
00:39:24.164 [2024-10-13 14:35:27.621406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.621439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.621793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.621833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.622182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.622213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.622593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.622624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.623074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.623107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.623548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.623578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.623957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.623987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.624356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.624386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.624785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.624815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.625178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.625209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 
00:39:24.164 [2024-10-13 14:35:27.625560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.625599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.625943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.625972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.626132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.626162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.626537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.626567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.626937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.626967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.627343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.627377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.627732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.627771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.628153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.628202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.628546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.628597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.628922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.628972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 
00:39:24.164 [2024-10-13 14:35:27.629393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.629450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.629837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.629883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.630332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.630379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.630735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.630775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.631161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.631192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.631557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.631589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.631835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.631869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.632124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.632155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.632506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.632535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.632906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.632936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 
00:39:24.164 [2024-10-13 14:35:27.633297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.633328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.633682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.633712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.634095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.634127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.634506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.634536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.634801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.164 [2024-10-13 14:35:27.634831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.164 qpair failed and we were unable to recover it. 00:39:24.164 [2024-10-13 14:35:27.635106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.635138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.635498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.635527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.635886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.635918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.636202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.636235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.636602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.636631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 
00:39:24.165 [2024-10-13 14:35:27.636986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.637025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.637456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.637487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.637854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.637882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.638146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.638176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.638536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.638566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.638978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.639007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.639400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.639430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.639667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.639699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.639978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.640007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.640267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.640302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 
00:39:24.165 [2024-10-13 14:35:27.640667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.640697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.641113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.641142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.641492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.641520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.641884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.641913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.642140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.642173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.642529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.642558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.642810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.642839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.643188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.643219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.643634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.643663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.643917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.643946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 
00:39:24.165 [2024-10-13 14:35:27.644308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.644338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.644692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.644722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.645096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.645127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.645555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.645584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.645910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.645939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.646295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.646327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.646665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.646695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.647052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.647092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.647472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.647501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.647855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.647883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 
00:39:24.165 [2024-10-13 14:35:27.648135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.648169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.648534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.648563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.648916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.648945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.649298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.649329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.649763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.649792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.165 qpair failed and we were unable to recover it. 00:39:24.165 [2024-10-13 14:35:27.650135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.165 [2024-10-13 14:35:27.650166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.650531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.650560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.650911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.650941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.651280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.651310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.651674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.651703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 
00:39:24.166 [2024-10-13 14:35:27.651946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.651982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.652340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.652371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.652721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.652750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.653122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.653151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.653513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.653542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.653916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.653945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.654404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.654435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.654801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.654829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.655084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.655117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.655372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.655403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 
00:39:24.166 [2024-10-13 14:35:27.655764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.655793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.656145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.656175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.656540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.656569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.656934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.656963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.657307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.657337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.657694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.657723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.657976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.658009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.658401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.658431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.658790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.658820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.659187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.659218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 
00:39:24.166 [2024-10-13 14:35:27.659617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.659646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.660012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.660040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.660419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.660449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.660688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.660717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.661059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.661104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.661440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.661469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.661829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.661858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.662116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.662146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.662433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.662463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.662829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.662858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 
00:39:24.166 [2024-10-13 14:35:27.663206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.663238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.663611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.663640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.663997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.664026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.664424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.664454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.664823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.664852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.166 qpair failed and we were unable to recover it. 00:39:24.166 [2024-10-13 14:35:27.665218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.166 [2024-10-13 14:35:27.665248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.665616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.665644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.666009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.666040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.666426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.666456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.666817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.666846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 
00:39:24.167 [2024-10-13 14:35:27.667202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.667241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.667594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.667623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.667893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.667921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.668298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.668328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.668694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.668722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.669104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.669134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.669421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.669450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.669825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.669855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.670204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.670236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.670574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.670604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 
00:39:24.167 [2024-10-13 14:35:27.670991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.671020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.671392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.671421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.671675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.671704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.672074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.672103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.672351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.672381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.672723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.672754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.673132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.673162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.673537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.673568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.673929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.673959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.674350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.674380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 
00:39:24.167 [2024-10-13 14:35:27.674734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.674765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.675133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.675163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.675532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.675560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.675909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.675938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.676324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.676354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.676699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.676727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.677096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.677126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.677487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.677519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.677891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.677921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 00:39:24.167 [2024-10-13 14:35:27.678305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.167 [2024-10-13 14:35:27.678335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.167 qpair failed and we were unable to recover it. 
00:39:24.167 [2024-10-13 14:35:27.678697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:24.167 [2024-10-13 14:35:27.678726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:24.167 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 14:35:27.678697 through 14:35:27.757048 with no variation other than the timestamps: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111, and each qpair attempt on tqpair=0x7f5534000b90 is reported as failed and unrecoverable ...]
00:39:24.173 [2024-10-13 14:35:27.757017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:24.173 [2024-10-13 14:35:27.757048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:24.173 qpair failed and we were unable to recover it.
00:39:24.173 [2024-10-13 14:35:27.757401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.757444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.757789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.757819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.758186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.758221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.758504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.758535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.758806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.758839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.759175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.759205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.759594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.759624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.759889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.759918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.760294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.760326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.760679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.760710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 
00:39:24.173 [2024-10-13 14:35:27.761086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.761119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.761370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.761399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.761760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.761791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.762156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.762188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.762555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.762586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.762946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.762975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.763362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.763393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.763737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.763766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.764139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.764170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.764525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.764562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 
00:39:24.173 [2024-10-13 14:35:27.764909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.764939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.765204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.765233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.765623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.765653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.766006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.766037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.766223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.766253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.766621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.766650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.767008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.767040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.767314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.173 [2024-10-13 14:35:27.767348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.173 qpair failed and we were unable to recover it. 00:39:24.173 [2024-10-13 14:35:27.767690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.767722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.768061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.768120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 
00:39:24.174 [2024-10-13 14:35:27.768348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.768381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.768759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.768789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.769026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.769056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.769447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.769477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.769845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.769874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.770232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.770262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.770632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.770661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.771022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.771051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.771435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.771465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.771830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.771860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 
00:39:24.174 [2024-10-13 14:35:27.772105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.772146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.772506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.772535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.772890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.772918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.773275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.773306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.773667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.773696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.774076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.774107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.774468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.774497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.774860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.774888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.775284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.775314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.775681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.775710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 
00:39:24.174 [2024-10-13 14:35:27.776082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.776112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.776419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.776450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.776772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.776802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.777048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.777105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.777430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.777461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.777835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.777865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.778237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.778269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.778632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.778663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.779032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.779071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.779322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.779351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 
00:39:24.174 [2024-10-13 14:35:27.779585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.779614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.779973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.780003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.780219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.780249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.780622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.780651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.781019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.781049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.781294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.781327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.781704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.781734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.782104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.782135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.174 [2024-10-13 14:35:27.782480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.174 [2024-10-13 14:35:27.782509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.174 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.782874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.782903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 
00:39:24.175 [2024-10-13 14:35:27.783275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.783307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.783670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.783701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.783967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.783996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.784323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.784353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.784720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.784750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.785124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.785153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.785539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.785568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.785929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.785960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.786326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.786357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.786709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.786747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 
00:39:24.175 [2024-10-13 14:35:27.787121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.787157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.787414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.787442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.787821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.787849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.788190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.788222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.788612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.788641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.789001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.789030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.789430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.789461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.789828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.789857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.790217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.790247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.790608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.790637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 
00:39:24.175 [2024-10-13 14:35:27.791008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.791038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.791287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.791319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.791694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.791723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.792091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.792122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.792490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.792521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.792869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.792898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.793330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.793360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.793707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.793737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.794052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.794107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.794506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.794535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 
00:39:24.175 [2024-10-13 14:35:27.794887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.794916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.795291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.795322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.795614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.795644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.796024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.796053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.796429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.796458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.796822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.796851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.797201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.797231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.797451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.797483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.797841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.175 [2024-10-13 14:35:27.797870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.175 qpair failed and we were unable to recover it. 00:39:24.175 [2024-10-13 14:35:27.798118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.798147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 
00:39:24.176 [2024-10-13 14:35:27.798522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.798551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.798922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.798952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.799299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.799329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.799684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.799715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.800084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.800116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.800489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.800517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.800774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.800805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.801169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.801199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.801543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.801571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.801804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.801835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 
00:39:24.176 [2024-10-13 14:35:27.802239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.802275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.802626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.802657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.803001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.803031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.803411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.803442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.803801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.803829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.804195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.804226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.804608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.804639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.805027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.805057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.805414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.805446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.805707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.805736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 
00:39:24.176 [2024-10-13 14:35:27.806122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.806152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.806537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.806565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.806927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.806957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.807308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.807338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.807593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.807623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.807997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.808027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.808296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.808327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.808698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.808727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.809091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.809122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 00:39:24.176 [2024-10-13 14:35:27.809489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.176 [2024-10-13 14:35:27.809518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.176 qpair failed and we were unable to recover it. 
00:39:24.478 [2024-10-13 14:35:27.884520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.884557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.884893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.884924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.885222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.885252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.885622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.885652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.886083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.886114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.886515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.886544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.886910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.886940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.887358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.478 [2024-10-13 14:35:27.887389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.478 qpair failed and we were unable to recover it. 00:39:24.478 [2024-10-13 14:35:27.887731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.887760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.888146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.888177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 
00:39:24.479 [2024-10-13 14:35:27.888531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.888562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.888820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.888849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.889199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.889231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.889668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.889697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.890054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.890095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.890450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.890480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.890915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.890945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.891291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.891328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.891697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.891728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.892088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.892122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 
00:39:24.479 [2024-10-13 14:35:27.892487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.892517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.892931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.892961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.893317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.893354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.893705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.893734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.894106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.894140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.894475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.894505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.894868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.894898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.895242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.895272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.895637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.895666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.896037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.896079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 
00:39:24.479 [2024-10-13 14:35:27.896439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.896469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.896854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.896886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.897249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.897281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.897623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.897655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.898011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.898040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.898402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.898432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.898691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.898720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.899087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.899119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.899466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.899496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 00:39:24.479 [2024-10-13 14:35:27.899840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.479 [2024-10-13 14:35:27.899871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.479 qpair failed and we were unable to recover it. 
00:39:24.480 [2024-10-13 14:35:27.900239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.900269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.900637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.900667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.901038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.901089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.901365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.901400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.901778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.901809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.902081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.902112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.902356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.902385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.902764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.902793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.903156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.903187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.903437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.903470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 
00:39:24.480 [2024-10-13 14:35:27.903849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.903879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.904149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.904178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.904558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.904588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.904957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.904986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.905354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.905385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.905681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.905709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.906094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.906126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.906485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.906521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.906888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.906917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.907290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.907321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 
00:39:24.480 [2024-10-13 14:35:27.907675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.907703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.907954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.907983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.908320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.908351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.908692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.908722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.908981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.909010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.909408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.909438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.909742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.909770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.910018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.910047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.910441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.910471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.910834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.910862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 
00:39:24.480 [2024-10-13 14:35:27.911125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.911158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.911550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.911579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.911943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.911972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.912328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.912359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.912731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.480 [2024-10-13 14:35:27.912759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.480 qpair failed and we were unable to recover it. 00:39:24.480 [2024-10-13 14:35:27.913120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.913152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.913503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.913532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.913892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.913921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.914291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.914322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.914675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.914704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 
00:39:24.481 [2024-10-13 14:35:27.915084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.915115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.915476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.915506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.915870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.915899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.916261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.916291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.916632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.916661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.916915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.916945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.917320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.917350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.917708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.917737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.918099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.918130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.918483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.918514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 
00:39:24.481 [2024-10-13 14:35:27.918951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.918980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.919209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.919242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.919501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.919530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.919892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.919920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.920291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.920322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.920688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.920717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.921088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.921119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.921471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.921506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.921857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.481 [2024-10-13 14:35:27.921885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.481 qpair failed and we were unable to recover it. 00:39:24.481 [2024-10-13 14:35:27.922233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.922263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 
00:39:24.482 [2024-10-13 14:35:27.922630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.922660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.923035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.923073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.923299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.923330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.923689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.923719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.924092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.924122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.924386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.924417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.924765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.924794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.925225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.925256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.925626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.925654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.926015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.926044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 
00:39:24.482 [2024-10-13 14:35:27.926347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.926377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.926719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.926749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.927088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.927118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.927468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.927498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.927856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.927885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.928140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.928173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.928520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.928549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.928974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.929003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.929376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.929406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.929659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.929691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 
00:39:24.482 [2024-10-13 14:35:27.929984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.930013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.482 [2024-10-13 14:35:27.930220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.482 [2024-10-13 14:35:27.930250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.482 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.930640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.930668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.931028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.931056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.931451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.931482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.931717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.931749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.932091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.932122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.932419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.932449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.932837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.932865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.933227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.933257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 
00:39:24.483 [2024-10-13 14:35:27.933490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.933520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.933886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.933916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.934141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.934173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.934534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.934569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.934929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.934957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.935406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.935436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.935796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.935824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.936184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.936219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.936575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.936605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 00:39:24.483 [2024-10-13 14:35:27.936863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.936891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it. 
00:39:24.483 [2024-10-13 14:35:27.937241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.483 [2024-10-13 14:35:27.937271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.483 qpair failed and we were unable to recover it.
[... the same three-message pattern repeats continuously from 14:35:27.937 to 14:35:28.016: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f5534000b90, and each qpair fails without recovery ...]
00:39:24.498 [2024-10-13 14:35:28.015990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.016018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it.
00:39:24.498 [2024-10-13 14:35:28.016422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.016452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.016800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.016830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.017183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.017222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.017586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.017616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.017984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.018013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.018281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.018310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.018577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.018609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.018997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.019026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.019374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.019406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.019716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.019746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 
00:39:24.498 [2024-10-13 14:35:28.020122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.020152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.020549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.020577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.020933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.020963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.021352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.021384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.021750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.021785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.022143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.022173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.022524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.022553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.022861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.022891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.023243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.023274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.023642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.023671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 
00:39:24.498 [2024-10-13 14:35:28.024043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.024101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.024494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.024523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.024900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.024928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.025292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.025322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.025680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.025710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.026077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.026109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.026530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.026560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.026909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.026944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.027325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.027355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.027641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.027671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 
00:39:24.498 [2024-10-13 14:35:28.028028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.028056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.028442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.028473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.028840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.028869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.029213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.029243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.029578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.029608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.029980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.498 [2024-10-13 14:35:28.030009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.498 qpair failed and we were unable to recover it. 00:39:24.498 [2024-10-13 14:35:28.030433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.030464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.030803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.030833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.031239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.031270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.031639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.031668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 
00:39:24.499 [2024-10-13 14:35:28.032132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.032162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.032541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.032569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.032912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.032942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.033343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.033374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.033613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.033645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.034076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.034107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.034474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.034502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.034876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.034904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.035264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.035295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.035517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.035548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 
00:39:24.499 [2024-10-13 14:35:28.035910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.035942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.036209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.036239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.036609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.036637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.036995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.037023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.037381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.037418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.037647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.037678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.038059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.038118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.038508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.038538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.038900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.038929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.039209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.039243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 
00:39:24.499 [2024-10-13 14:35:28.039597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.039626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.039986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.040014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.040381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.040412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.040777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.040806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.041180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.041210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.041434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.041465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.041820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.041850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.042222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.042253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.042489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.042522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.042880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.042910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 
00:39:24.499 [2024-10-13 14:35:28.043293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.043325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.043687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.043716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.043943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.043972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.044345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.044376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.044719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.044749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.045086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.045116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.045372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.045401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.045754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.045784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.046147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.046177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.046582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.046610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 
00:39:24.499 [2024-10-13 14:35:28.046948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.046977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.047341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.047372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.047721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.047750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.048118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.048149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.048511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.048540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.048895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.048923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.049284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.049314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.049600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.049633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.049993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.050023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.050437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.050468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 
00:39:24.499 [2024-10-13 14:35:28.050711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.050739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.051097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.051127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.051500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.499 [2024-10-13 14:35:28.051531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.499 qpair failed and we were unable to recover it. 00:39:24.499 [2024-10-13 14:35:28.051898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.051927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.052302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.052340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.052703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.052732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.053104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.053134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.053480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.053509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.053884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.053913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.054341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.054370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 
00:39:24.500 [2024-10-13 14:35:28.054729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.054757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.055182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.055214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.055574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.055602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.055973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.056001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.056366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.056396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.056760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.056788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.057145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.057175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.057558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.057586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.057951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.057981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.058279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.058308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 
00:39:24.500 [2024-10-13 14:35:28.058551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.058582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.058837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.058865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.059245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.059276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.059523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.059552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.059906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.059936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.060311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.060341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.060703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.060732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.061086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.061115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.061465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.061494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.061855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.061884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 
00:39:24.500 [2024-10-13 14:35:28.062222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.062252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.062609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.062639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.063009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.063038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.063400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.063429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.063799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.063827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.064178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.064209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.064572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.064601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.064957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.064987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.065351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.065381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 00:39:24.500 [2024-10-13 14:35:28.065751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.500 [2024-10-13 14:35:28.065781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.500 qpair failed and we were unable to recover it. 
00:39:24.500 [2024-10-13 14:35:28.066146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:24.500 [2024-10-13 14:35:28.066176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:24.500 qpair failed and we were unable to recover it.
00:39:24.500 [... the same three-line failure sequence repeats continuously from 14:35:28.066 through 14:35:28.145: every connect() attempt to 10.0.0.2, port=4420 for tqpair=0x7f5534000b90 fails with errno = 111, and each time the qpair cannot be recovered ...]
00:39:24.503 [2024-10-13 14:35:28.145385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:24.503 [2024-10-13 14:35:28.145416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:24.503 qpair failed and we were unable to recover it.
00:39:24.503 [2024-10-13 14:35:28.145784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.145812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.146162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.146193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.146562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.146591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.146909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.146937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.147285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.147315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.147677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.147706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.148077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.148107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.148454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.148483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.148865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.148894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.503 qpair failed and we were unable to recover it. 00:39:24.503 [2024-10-13 14:35:28.149247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.503 [2024-10-13 14:35:28.149277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 
00:39:24.504 [2024-10-13 14:35:28.149704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.149732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.149974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.150004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.150377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.150407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.150766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.150795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.151144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.151174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.151544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.151573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.151936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.151965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.152310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.152340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.152650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.152679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.153032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.153072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 
00:39:24.504 [2024-10-13 14:35:28.153397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.153426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.153765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.153796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.154054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.154094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.154465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.154495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.154852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.154881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.155129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.155161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.155387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.155418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.155849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.155879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.156257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.156288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.156630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.156659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 
00:39:24.504 [2024-10-13 14:35:28.156948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.156976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.157225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.157259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.157622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.157651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.157995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.158024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.158387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.158431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.158852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.158881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.159268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.159298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.159667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.159696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.159945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.159976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.504 [2024-10-13 14:35:28.160403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.160434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 
00:39:24.504 [2024-10-13 14:35:28.160776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.504 [2024-10-13 14:35:28.160806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.504 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.161165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.161196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.161540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.161571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.161910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.161939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.162284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.162314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.162680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.162710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.163085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.163117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.163438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.163467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.163853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.163882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.164252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.164284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 
00:39:24.778 [2024-10-13 14:35:28.164628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.164657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.165017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.165045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.165425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.165455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.165800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.165829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.166192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.166222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.166592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.166622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.166982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.167011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.167372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.167402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.167776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.167805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.168169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.168199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 
00:39:24.778 [2024-10-13 14:35:28.168580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.168609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.168974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.169003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.169368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.169398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.169762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.169792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.170146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.170176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.170542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.170572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.170932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.170960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.171304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.171334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.171669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.171698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.172040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.172080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 
00:39:24.778 [2024-10-13 14:35:28.172439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.172468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.172846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.172874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.173235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.173273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.173604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.173632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.173997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.174031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.174473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.174503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.174871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.174900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.175198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.175228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.175595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.175624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 00:39:24.778 [2024-10-13 14:35:28.175986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.778 [2024-10-13 14:35:28.176014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.778 qpair failed and we were unable to recover it. 
00:39:24.778 [2024-10-13 14:35:28.176350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.176380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.176744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.176773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.177138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.177168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.177532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.177560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.177805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.177834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.178187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.178216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.178584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.178612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.178974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.179003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.179386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.179416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.179655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.179684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 
00:39:24.779 [2024-10-13 14:35:28.179923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.179954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.180332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.180362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.180731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.180760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.181122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.181152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.181387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.181417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.181788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.181817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.182183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.182214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.182611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.182640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.183002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.183031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.183285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.183314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 
00:39:24.779 [2024-10-13 14:35:28.183563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.183593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.183967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.183995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.184359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.184389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.184751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.184779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.185140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.185170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.185538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.185566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.186014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.186042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.186352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.186382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.186747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.186776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.187135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.187166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 
00:39:24.779 [2024-10-13 14:35:28.187538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.187567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.187930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.187958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.188325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.188354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.188726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.188756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.189124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.189160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.189512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.189542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.189762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.189793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.190179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.190210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.190598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.190627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.190988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.191017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 
00:39:24.779 [2024-10-13 14:35:28.191384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.191414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.191796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.191825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.192277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.192307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.192661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.192690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.192935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.192963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.193294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.193324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.193676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.193705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.194077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.194107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.194477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.194507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 00:39:24.779 [2024-10-13 14:35:28.194874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.194903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it. 
00:39:24.779 [2024-10-13 14:35:28.195285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.779 [2024-10-13 14:35:28.195315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.779 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats continuously, some two hundred times in this excerpt, through 2024-10-13 14:35:28.274524: each connect() to 10.0.0.2, port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f5534000b90, and the qpair is never recovered ...]
00:39:24.783 [2024-10-13 14:35:28.274893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.783 [2024-10-13 14:35:28.274921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.783 qpair failed and we were unable to recover it. 00:39:24.783 [2024-10-13 14:35:28.275267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.275302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.275646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.275675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.276039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.276076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.276436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.276466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.276810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.276839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.277265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.277295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.277551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.277581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.277938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.277967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.278393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.278423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 
00:39:24.784 [2024-10-13 14:35:28.278763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.278792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.279163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.279193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.279380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.279411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.279773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.279803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.280167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.280197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.280539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.280568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.280921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.280950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.281301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.281332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.281732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.281761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.282118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.282149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 
00:39:24.784 [2024-10-13 14:35:28.282518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.282547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.282911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.282939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.283389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.283419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.283683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.283711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.284082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.284113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.284472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.284500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.284866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.284894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.285271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.285300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.285685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.285714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.285964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.285992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 
00:39:24.784 [2024-10-13 14:35:28.286234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.286264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.286618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.286648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.287013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.287042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.287388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.287419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.287666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.287696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.288036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.288078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.288415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.288445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.288813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.288841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.289210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.289240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.289481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.289510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 
00:39:24.784 [2024-10-13 14:35:28.289871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.289899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.290152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.290189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.290434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.290461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.290822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.290850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.291121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.291150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.291407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.291436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.291823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.291851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.292218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.292248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.784 [2024-10-13 14:35:28.292506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.784 [2024-10-13 14:35:28.292534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.784 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.292919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.292949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 
00:39:24.785 [2024-10-13 14:35:28.293315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.293345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.293713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.293742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.294108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.294138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.294500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.294528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.294881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.294910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.295280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.295310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.295658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.295687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.296029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.296060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.296431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.296460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.296706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.296737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 
00:39:24.785 [2024-10-13 14:35:28.297107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.297138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.297474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.297506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.297872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.297901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.298260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.298291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.298652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.298681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.299037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.299077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.299438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.299466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.299716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.299747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.300106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.300136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.300522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.300550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 
00:39:24.785 [2024-10-13 14:35:28.300929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.300958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.301303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.301333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.301697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.301725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.302087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.302117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.302482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.302512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.302872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.302901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.303269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.303298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.303662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.303691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.304050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.304087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.304414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.304443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 
00:39:24.785 [2024-10-13 14:35:28.304818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.304846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.305094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.305129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.305502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.305531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.305748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.305778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.306142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.306172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.306545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.306574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.306941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.306969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.307338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.307368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.307613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.307645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.308002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.308032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 
00:39:24.785 [2024-10-13 14:35:28.308377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.308408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.308777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.308807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.309153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.309183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.309543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.309581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.309984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.310012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.310393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.310423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.310764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.310795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.311169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.311198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.311532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.311561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.311925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.311954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 
00:39:24.785 [2024-10-13 14:35:28.312336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.312365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.312725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.785 [2024-10-13 14:35:28.312753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.785 qpair failed and we were unable to recover it. 00:39:24.785 [2024-10-13 14:35:28.313008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.313040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.313396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.313425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.313794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.313823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.314199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.314230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.314684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.314713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.315084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.315114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.315485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.315515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.315747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.315778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 
00:39:24.786 [2024-10-13 14:35:28.316145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.316175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.316514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.316543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.316905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.316935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.317289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.317318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.317685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.317714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.318082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.318111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.318471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.318500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.318767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.318796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.319177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.319206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.319583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.319612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 
00:39:24.786 [2024-10-13 14:35:28.319949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.319977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.320336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.320372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.320736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.320766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.321128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.321157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.321537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.321566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.321940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.321970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.322345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.322375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.322739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.322768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.323130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.323160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 00:39:24.786 [2024-10-13 14:35:28.323528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.786 [2024-10-13 14:35:28.323557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.786 qpair failed and we were unable to recover it. 
00:39:24.786 [2024-10-13 14:35:28.323924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:24.786 [2024-10-13 14:35:28.323953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:24.786 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats continuously from 14:35:28.323924 through 14:35:28.404725 (wall clock 00:39:24.786-00:39:24.790): every connect() to 10.0.0.2, port=4420 returns errno = 111, and each reconnect attempt on tqpair=0x7f5534000b90 ends with "qpair failed and we were unable to recover it."; roughly two hundred duplicate entries elided ...]
00:39:24.790 [2024-10-13 14:35:28.405085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.405116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.405467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.405496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.405853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.405882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.406247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.406278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.406637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.406666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.407035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.407071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.407434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.407463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.407824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.407853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.408212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.408242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.408473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.408506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 
00:39:24.790 [2024-10-13 14:35:28.408871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.408901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.409268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.409298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.409609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.409639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.410007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.410036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.410459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.410488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.410739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.410769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.411126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.411156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.411501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.411531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.411894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.411923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.412290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.412320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 
00:39:24.790 [2024-10-13 14:35:28.412685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.412714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.413087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.413117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.413466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.413495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.413859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.413887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.414151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.414180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.414407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.790 [2024-10-13 14:35:28.414438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.790 qpair failed and we were unable to recover it. 00:39:24.790 [2024-10-13 14:35:28.414806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.414834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.415193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.415222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.415447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.415479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.415835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.415864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 
00:39:24.791 [2024-10-13 14:35:28.416033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.416082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.416508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.416538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.416881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.416910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.417284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.417314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.417671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.417699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.418073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.418102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.418468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.418496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.418861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.418889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.419363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.419392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.419746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.419776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 
00:39:24.791 [2024-10-13 14:35:28.420141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.420171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.420512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.420540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.420880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.420909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.421271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.421301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.421676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.421704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.422082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.422111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.422466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.422494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.422876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.422904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.423286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.423323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.423752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.423781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 
00:39:24.791 [2024-10-13 14:35:28.424142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.424172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.424523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.424551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.424926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.424954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.425316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.425346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.425586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.425615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.425956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.425986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.426343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.426373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.426739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.426768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.427136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.427165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.427584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.427613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 
00:39:24.791 [2024-10-13 14:35:28.427941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.427971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.428308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.428338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.428583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.428613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.429056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.429114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.429472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.429501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.429876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.429904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.430244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.430273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.430643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.430672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.431032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.431061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.431425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.431454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 
00:39:24.791 [2024-10-13 14:35:28.431885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.431913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.432276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.432306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.432670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.432699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.433075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.791 [2024-10-13 14:35:28.433106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.791 qpair failed and we were unable to recover it. 00:39:24.791 [2024-10-13 14:35:28.433270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.433302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.433692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.433721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.434091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.434124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.434394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.434423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.434792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.434821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.435153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.435182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 
00:39:24.792 [2024-10-13 14:35:28.435547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.435577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.435939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.435968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.436201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.436233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.436460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.436492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.436877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.436907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.437270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.437299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.437658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.437687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.438046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.438084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.438443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.438478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.438836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.438865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 
00:39:24.792 [2024-10-13 14:35:28.439244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.439274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.439635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.439663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.440036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.440088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.440446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.440475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.440835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.440863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.441117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.441146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.441520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.441549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.441788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.441819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.442225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.442256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.442617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.442645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 
00:39:24.792 [2024-10-13 14:35:28.442984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.443013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.443420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.443449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.443813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.443841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.444201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.444230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.444529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.444558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.444903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.444933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.445305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.445335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.445701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.445731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.446097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.446127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.446492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.446521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 
00:39:24.792 [2024-10-13 14:35:28.446900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.446929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.447317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.447346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.447710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.447738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.448119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.448149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.448514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.448543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.448916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.448946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.449301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.449331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.449581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.449610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.449959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.449987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.450324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.450355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 
00:39:24.792 [2024-10-13 14:35:28.450714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.450743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.451109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.451140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.451427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.451456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.451709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.451738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.452098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.452128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.452400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.452429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.452793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.452822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.792 qpair failed and we were unable to recover it. 00:39:24.792 [2024-10-13 14:35:28.453195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.792 [2024-10-13 14:35:28.453224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.453585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.453620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.453973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.454002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 
00:39:24.793 [2024-10-13 14:35:28.454353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.454383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.454745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.454774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.455138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.455192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.455586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.455616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.455967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.455995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.456257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.456287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.456642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.456669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.457031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.457059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.457425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.457454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 00:39:24.793 [2024-10-13 14:35:28.457810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:24.793 [2024-10-13 14:35:28.457839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:24.793 qpair failed and we were unable to recover it. 
00:39:25.071 [2024-10-13 14:35:28.529394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.529424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.529795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.529825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.530190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.530220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.530576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.530604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.530966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.530995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.531362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.531392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.531751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.531780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.532131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.532159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.532373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.532404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.532797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.532826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 
00:39:25.071 [2024-10-13 14:35:28.533196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.533225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.533496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.533524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.533895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.533924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.534287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.534317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.534543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.534574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.534942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.534972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.535313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.535344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.535689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.535717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.535961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.535992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.536360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.536390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 
00:39:25.071 [2024-10-13 14:35:28.536750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.536779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.537017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.537044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.537420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.537449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.537813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.537843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.538199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.538229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.538600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.538629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.539001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.539029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.539428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.539458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.539859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.539888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.540246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.540275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 
00:39:25.071 [2024-10-13 14:35:28.540646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.540674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.540940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.540968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.541224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.541257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.541603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.071 [2024-10-13 14:35:28.541632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.071 qpair failed and we were unable to recover it. 00:39:25.071 [2024-10-13 14:35:28.541984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.542013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.542254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.542285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.542692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.542720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.542965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.542993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.543356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.543393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.543805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.543834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 
00:39:25.072 [2024-10-13 14:35:28.544075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.544108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.544229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.544260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.544625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.544654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.545014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.545043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.545225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.545254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.545664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.545693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.546006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.546035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.546408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.546438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.546806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.546835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.547180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.547212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 
00:39:25.072 [2024-10-13 14:35:28.547559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.547588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.547927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.547956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.548305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.548337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.548670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.548699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.549060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.549099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.549460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.549489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.549857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.549886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.550227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.550258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.550631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.550660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.550924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.550953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 
00:39:25.072 [2024-10-13 14:35:28.551187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.551218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.551596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.551625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.551986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.552014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.552284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.552314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.552662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.552691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.553049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.553088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.553471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.553501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.553742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.553771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.554078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.554108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.554446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.554474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 
00:39:25.072 [2024-10-13 14:35:28.554849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.554877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.555234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.555264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.555618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.555646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.555984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.556014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.556358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.556389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.556640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.556671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.557017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.557047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.557387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.557419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.557772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.557807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.072 [2024-10-13 14:35:28.558153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.558184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 
00:39:25.072 [2024-10-13 14:35:28.558547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.072 [2024-10-13 14:35:28.558575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.072 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.559005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.559034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.559405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.559434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.559815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.559844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.560208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.560238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.560596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.560624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.560992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.561021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.561382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.561413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.561769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.561798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.562148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.562178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 
00:39:25.073 [2024-10-13 14:35:28.562525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.562554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.562843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.562872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.563246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.563277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.563637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.563666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.564035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.564073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.564435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.564464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.564826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.564854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.565097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.565128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.565481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.565510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.565876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.565905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 
00:39:25.073 [2024-10-13 14:35:28.566244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.566274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.566644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.566673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.567039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.567076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.567429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.567457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.567812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.567840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.568212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.568242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.568610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.568640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.569011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.569040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.569412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.569443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.569813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.569842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 
00:39:25.073 [2024-10-13 14:35:28.569971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.570002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.570387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.570417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.570759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.570789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.571137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.571167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.571502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.571539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.571905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.571934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.572286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.572317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.572673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.572702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.573074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.573133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.573494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.573523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 
00:39:25.073 [2024-10-13 14:35:28.573869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.573898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.574256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.574287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.574650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.574679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.575113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.575143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.575496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.575525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.575896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.575923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.576287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.073 [2024-10-13 14:35:28.576317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.073 qpair failed and we were unable to recover it. 00:39:25.073 [2024-10-13 14:35:28.576691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.576720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.577083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.577111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.577513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.577542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 
00:39:25.074 [2024-10-13 14:35:28.577902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.577931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.578305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.578334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.578708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.578737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.579136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.579166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.579526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.579555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.579907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.579937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.580286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.580315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.580677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.580705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.581120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.581151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 00:39:25.074 [2024-10-13 14:35:28.581573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.581603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it. 
00:39:25.074 [2024-10-13 14:35:28.581967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.074 [2024-10-13 14:35:28.581998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.074 qpair failed and we were unable to recover it.
...
00:39:25.078 [2024-10-13 14:35:28.660373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.078 [2024-10-13 14:35:28.660401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.078 qpair failed and we were unable to recover it.
00:39:25.078 [2024-10-13 14:35:28.660758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.078 [2024-10-13 14:35:28.660787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.078 qpair failed and we were unable to recover it. 00:39:25.078 [2024-10-13 14:35:28.661150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.078 [2024-10-13 14:35:28.661180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.078 qpair failed and we were unable to recover it. 00:39:25.078 [2024-10-13 14:35:28.661630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.078 [2024-10-13 14:35:28.661658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.078 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.662103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.662133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.662503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.662532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.662905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.662934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.663331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.663361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.663708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.663738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.664075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.664105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.664459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.664490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 
00:39:25.079 [2024-10-13 14:35:28.664870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.664899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.665240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.665272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.665618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.665646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.666059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.666113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.666449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.666479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.666724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.666754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.667105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.667135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.667538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.667567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.667914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.667943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.668229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.668258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 
00:39:25.079 [2024-10-13 14:35:28.668624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.668653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.669016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.669044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.669450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.669479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.669833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.669861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.670235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.670265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.670633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.670661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.670997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.671026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.671404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.671434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.671768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.671796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.672161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.672191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 
00:39:25.079 [2024-10-13 14:35:28.672592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.672621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.672828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.672859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.673242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.673273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.673609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.673638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.673965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.673993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.674359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.674394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.674640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.674668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.675041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.675093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.675445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.675474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.675842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.675871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 
00:39:25.079 [2024-10-13 14:35:28.676212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.676241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.676593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.676622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.676992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.677021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.677292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.677321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.677699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.677728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.678086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.678116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.678484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.678513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.678886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.678915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.679284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.679314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.079 qpair failed and we were unable to recover it. 00:39:25.079 [2024-10-13 14:35:28.679678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.079 [2024-10-13 14:35:28.679708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 
00:39:25.080 [2024-10-13 14:35:28.680088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.680118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.680490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.680518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.680882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.680910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.681247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.681279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.681625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.681653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.682011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.682040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.682412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.682442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.682673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.682705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.683090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.683121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.683467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.683495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 
00:39:25.080 [2024-10-13 14:35:28.683861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.683889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.684258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.684288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.684638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.684667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.685034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.685073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.685440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.685469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.685839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.685868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.686211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.686241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.686620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.686649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.687006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.687035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.687265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.687295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 
00:39:25.080 [2024-10-13 14:35:28.687572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.687600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.687931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.687960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.688342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.688372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.688721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.688749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.689174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.689204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.689561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.689601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.689956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.689984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.690422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.690452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.690812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.690841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.691083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.691116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 
00:39:25.080 [2024-10-13 14:35:28.691476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.691505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.691852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.691882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.692241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.692270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.692504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.692536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.692917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.692945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.693172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.693203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.693541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.693571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.693797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.693827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.694073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.694106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.694487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.694517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 
00:39:25.080 [2024-10-13 14:35:28.694872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.694901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.695293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.695324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.695704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.695733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.696082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.696112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.696357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.696385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.696748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.696777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.080 qpair failed and we were unable to recover it. 00:39:25.080 [2024-10-13 14:35:28.697149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.080 [2024-10-13 14:35:28.697180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.697409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.697441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.697829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.697859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.698162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.698191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 
00:39:25.081 [2024-10-13 14:35:28.698431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.698463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.698844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.698873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.699129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.699159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.699520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.699549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.699852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.699889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.700267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.700296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.700640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.700668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.701041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.701079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.701442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.701472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.701820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.701849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 
00:39:25.081 [2024-10-13 14:35:28.702209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.702240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.702603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.702631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.702987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.703016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.703371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.703400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.703768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.703798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.704171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.704207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.704510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.704538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.704886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.704915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.705244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.705275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.705635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.705664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 
00:39:25.081 [2024-10-13 14:35:28.706033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.706071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.706432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.706460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.706810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.706838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.707207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.707237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.707599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.707628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.707991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.708021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.708381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.708411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.708774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.708803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.709162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.709192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 00:39:25.081 [2024-10-13 14:35:28.709553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.081 [2024-10-13 14:35:28.709583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.081 qpair failed and we were unable to recover it. 
00:39:25.081 [2024-10-13 14:35:28.709946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.081 [2024-10-13 14:35:28.709975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.081 qpair failed and we were unable to recover it.
[log condensed: the three messages above repeat verbatim for every reconnection attempt from 14:35:28.710335 through 14:35:28.758995, always against tqpair=0x7f5534000b90, addr=10.0.0.2, port=4420, and always with errno = 111]
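On Linux, errno = 111 is ECONNREFUSED: the TCP connect() to 10.0.0.2 port 4420 (the standard NVMe/TCP port) is actively rejected, typically because nothing is listening at that address yet. Every failure in this excerpt is that single condition, surfaced first by posix_sock_create() and then by nvme_tcp_qpair_connect_sock(). A minimal sketch that reproduces the same errno with a plain socket (illustrative only, not SPDK code; the address and port are simply taken from the log lines above):

    /* Reproduce the errno = 111 (ECONNREFUSED) seen in the log by attempting
     * a plain TCP connect() to the NVMe/TCP target address. Sketch only. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target addr from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }
        close(fd);
        return 0;
    }

Compiled and run on a host where nothing accepts connections on 10.0.0.2:4420, this prints "connect() failed, errno = 111 (Connection refused)", matching the log.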
[log condensed: the identical failure triple continues from 14:35:28.759366 onward; while it repeats, the Jenkins wall-clock prefix advances from 00:39:25.084 to 00:39:25.358. The run of failures ends with:]
00:39:25.360 [2024-10-13 14:35:28.789183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.360 [2024-10-13 14:35:28.789214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.360 qpair failed and we were unable to recover it.
00:39:25.360 [2024-10-13 14:35:28.789576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.789607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.789964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.789994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.790246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.790281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.790630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.790660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.791010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.791041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.791422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.791456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.791813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.791842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.792209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.792239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.792601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.792630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.792974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.793004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 
00:39:25.360 [2024-10-13 14:35:28.793368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.793399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.793694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.793723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.794089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.794120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.794472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.794501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.794875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.360 [2024-10-13 14:35:28.794905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.360 qpair failed and we were unable to recover it. 00:39:25.360 [2024-10-13 14:35:28.795211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.795241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.795601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.795631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.795993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.796022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.796411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.796440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.796767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.796796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 
00:39:25.361 [2024-10-13 14:35:28.797039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.797079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.797462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.797492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.797857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.797885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.798246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.798276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.798634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.798664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.799033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.799072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.799410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.799440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.799799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.799828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.800197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.800227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.800568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.800599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 
00:39:25.361 [2024-10-13 14:35:28.800947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.800979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.801325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.801355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.801730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.801759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.802151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.802180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.802526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.802554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.802915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.802945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.803295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.803326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.803691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.803720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.804082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.804113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.804473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.804501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 
00:39:25.361 [2024-10-13 14:35:28.804761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.804790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.805126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.805156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.805494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.805529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.805888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.805917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.806290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.806319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.806648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.806677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.807053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.807091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.807451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.807480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.807814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.807843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.808206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.808236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 
00:39:25.361 [2024-10-13 14:35:28.808598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.808626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.808992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.809022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.809385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.809417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.809667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.361 [2024-10-13 14:35:28.809697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.361 qpair failed and we were unable to recover it. 00:39:25.361 [2024-10-13 14:35:28.810044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.810082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.810453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.810482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.810839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.810868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.811221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.811252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.811608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.811637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.812000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.812028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 
00:39:25.362 [2024-10-13 14:35:28.812436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.812466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.812833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.812862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.813230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.813259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.813639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.813668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.814027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.814056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.814427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.814455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.814793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.814821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.815075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.815107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.815490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.815519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 00:39:25.362 [2024-10-13 14:35:28.815755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.362 [2024-10-13 14:35:28.815786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.362 qpair failed and we were unable to recover it. 
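Errno 111 is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 while the target application is down, so every TCP connection attempt from the host initiator is refused immediately. A minimal sketch of the same probe in plain bash (a hypothetical helper, not part of the test suite; it relies on bash's built-in /dev/tcp redirection):

    # Poll until something accepts TCP connections on the NVMe-oF port.
    # Each failed iteration corresponds to one "connect() failed, errno = 111"
    # event in the log above.
    while ! bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        sleep 0.1    # back off briefly between attempts
    done
    echo "10.0.0.2:4420 is accepting connections again"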
00:39:25.362 [2024-10-13 14:35:28.816022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.362 [2024-10-13 14:35:28.816053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.362 qpair failed and we were unable to recover it.
[... the error pair repeats from 14:35:28.816487 through 14:35:28.817541 ...]
00:39:25.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1990066 Killed "${NVMF_APP[@]}" "$@"
[... the error pair repeats from 14:35:28.817905 through 14:35:28.818714 ...]
00:39:25.362 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:39:25.362 [2024-10-13 14:35:28.818961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.362 [2024-10-13 14:35:28.818993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.362 qpair failed and we were unable to recover it.
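The "line 36: 1990066 Killed" notice is bash reporting that the previously started nvmf_tgt (pid 1990066, held in the NVMF_APP array) received SIGKILL; that forced kill is exactly what this target_disconnect test exercises, and it is why all the connect() attempts above are refused. A hedged sketch of the shape of that step (the exact commands live in test/nvmf/host/target_disconnect.sh; the pid variable name here is an assumption for this snippet):

    # SIGKILL the running nvmf_tgt; bash prints the "Killed" job notice
    # and 10.0.0.2:4420 is left with no listener until a new target starts.
    sudo kill -9 "$old_nvmfpid"
    wait "$old_nvmfpid" 2>/dev/null || true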
00:39:25.362 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:39:25.362 [2024-10-13 14:35:28.819405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.362 [2024-10-13 14:35:28.819436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.362 qpair failed and we were unable to recover it.
00:39:25.362 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:39:25.362 [2024-10-13 14:35:28.819665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.362 [2024-10-13 14:35:28.819703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.362 qpair failed and we were unable to recover it.
00:39:25.362 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:25.362 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the error pair repeats from 14:35:28.820053 through 14:35:28.822038 ...]
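Here disconnect_init begins restarting the target via nvmfappstart. The -m 0xF0 argument is a hex CPU core mask: 0xF0 is binary 11110000, so the target's reactors run on cores 4-7. In rough outline, an nvmfappstart-style helper does something like the following (a sketch under assumptions; the real helper in nvmf/common.sh also handles network namespaces, extra flags, and timing):

    # Launch the target app on the requested cores and remember its pid
    # so the test can wait for it to listen and kill it again later.
    nvmf_tgt -m 0xF0 &    # 0xF0 = cores 4-7
    nvmfpid=$!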
[... the error pair repeats from 14:35:28.822416 through 14:35:28.825950 ...]
[... the error pair repeats from 14:35:28.826290 through 14:35:28.828173 ...]
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1991014
00:39:25.363 [2024-10-13 14:35:28.828548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.363 [2024-10-13 14:35:28.828578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.363 qpair failed and we were unable to recover it.
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1991014
00:39:25.363 [2024-10-13 14:35:28.828827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.363 [2024-10-13 14:35:28.828856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.363 qpair failed and we were unable to recover it.
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1991014 ']'
00:39:25.363 [2024-10-13 14:35:28.829096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.363 [2024-10-13 14:35:28.829135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.363 qpair failed and we were unable to recover it.
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:25.363 [2024-10-13 14:35:28.829503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.363 [2024-10-13 14:35:28.829532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.363 qpair failed and we were unable to recover it.
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:25.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:25.363 [2024-10-13 14:35:28.829896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.363 [2024-10-13 14:35:28.829926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.363 qpair failed and we were unable to recover it.
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:39:25.363 14:35:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:25.363 [2024-10-13 14:35:28.830289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.363 [2024-10-13 14:35:28.830319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.363 qpair failed and we were unable to recover it.
[... the error pair repeats from 14:35:28.830678 through 14:35:28.831109 ...]
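The new nvmf_tgt (pid 1991014) is launched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten then blocks until its JSON-RPC socket /var/tmp/spdk.sock is ready; the `local rpc_addr=/var/tmp/spdk.sock` and `local max_retries=100` lines above are that helper's setup. A minimal sketch of such a wait loop (an assumed implementation for illustration, not the literal helper from autotest_common.sh):

    # Spin until the RPC Unix socket appears, the process dies, or the
    # retry budget is exhausted.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process exited
            [ -S "$rpc_addr" ] && return 0           # RPC socket exists
            sleep 0.1
        done
        return 1                                     # timed out
    }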
00:39:25.363 [2024-10-13 14:35:28.831481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.831512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.831762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.831795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.832149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.832182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.832567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.832597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.832982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.833013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.833418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.833451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.833799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.833830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.834137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.834170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.834569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.834598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.363 [2024-10-13 14:35:28.834941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.834971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 
00:39:25.363 [2024-10-13 14:35:28.835358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.363 [2024-10-13 14:35:28.835391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.363 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.835744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.835775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.836177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.836210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.836462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.836492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.836854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.836886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.837138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.837171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.837534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.837567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.837931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.837968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.838352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.838383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.838743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.838775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 
00:39:25.364 [2024-10-13 14:35:28.839146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.839176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.839532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.839562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.839920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.839954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.840095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.840129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.840364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.840395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.840825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.840857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.841235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.841267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.841630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.841660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.842005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.842038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 00:39:25.364 [2024-10-13 14:35:28.842408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.364 [2024-10-13 14:35:28.842439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.364 qpair failed and we were unable to recover it. 
00:39:25.364 [2024-10-13 14:35:28.842787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.842817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.843077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.843109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.843342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.843372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.843739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.843769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.844122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.844155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.844397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.844435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.844800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.844831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.845209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.845241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.845610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.845641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.845928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.845958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.846334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.846364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.846797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.846829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.847098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.847129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.847540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.847571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.847820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.847850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.848179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.848211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.848571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.848602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.848986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.849017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.849447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.849477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.849843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.849873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.850233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.364 [2024-10-13 14:35:28.850265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.364 qpair failed and we were unable to recover it.
00:39:25.364 [2024-10-13 14:35:28.850658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.850688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.850911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.850943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.851190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.851224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.851622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.851652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.852023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.852056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.852460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.852491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.852686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.852724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.853103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.853134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.853515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.853544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.853804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.853836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.854103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.854134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.854362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.854395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.854643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.854672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.855026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.855074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.855459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.855488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.855769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.855798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.856051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.856093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.856348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.856378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.856638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.856668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.856915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.856945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.857323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.857354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.857739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.857770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.858162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.858192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.858563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.858594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.858979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.859008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.859318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.859350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.859753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.859783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.860146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.860176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.860593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.860623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.861003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.861033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.861419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.861449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.861829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.861860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.862248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.862279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.862661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.862692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.863088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.863118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.863483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.863514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.863865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.863895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.864303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.864332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.864714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.864742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.865138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.365 [2024-10-13 14:35:28.865170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.365 qpair failed and we were unable to recover it.
00:39:25.365 [2024-10-13 14:35:28.865419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.865447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.865813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.865843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.866229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.866261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.866628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.866657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.867011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.867043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.867421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.867452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.867826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.867862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.868128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.868158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.868538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.868567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.868931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.868960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.869333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.869364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.869735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.869765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.870146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.870177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.870443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.870475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.870834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.870867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.871266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.871299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.871664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.871694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.872079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.872114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.872483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.872512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.872906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.872935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.873235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.873265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.873649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.873678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.874047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.874090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.874496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.874525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.874899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.874928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.875283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.875312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.875652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.875683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.875938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.875967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.876333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.876365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.876709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.876738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.877107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.877141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.877507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.877537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.877796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.877824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.878138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.878171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.878426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.878456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.878822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.878853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.879019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.879048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.879461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.879492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.879895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.879925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.880203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.880233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.366 qpair failed and we were unable to recover it.
00:39:25.366 [2024-10-13 14:35:28.880641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.366 [2024-10-13 14:35:28.880671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.881101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.881132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.881469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.881500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.881853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.881882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.882311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.882343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.882690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.882717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.883116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.883152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.883519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.883552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.883954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.883984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.884241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.884271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.884657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.884687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.885038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.885077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.885539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.885568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.885924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.885962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.886385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.886416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.886779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.886809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.887175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.887206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.887587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.887617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.887978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.888008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.888247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.888280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.888473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.888505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.888758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.888791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.889030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.889089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.889462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.889491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.889857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.889886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.890249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.890280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.890655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.890684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.890940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.890969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.891339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.891369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.891741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.891770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.892128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.892159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.892533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.892562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.892925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.892954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.893321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.367 [2024-10-13 14:35:28.893349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.367 qpair failed and we were unable to recover it.
00:39:25.367 [2024-10-13 14:35:28.893686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.893715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.894092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.894123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.894562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.894591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.894928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.894958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.895329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.895334] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization...
00:39:25.368 [2024-10-13 14:35:28.895360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.895407] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:25.368 [2024-10-13 14:35:28.895577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.895610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.895965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.895993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.896304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.896333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.896714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.896744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.897123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.897153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.897511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.897541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.897915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.897945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.898332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.898363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.898736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.898767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.899029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.899073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.899456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.899488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.899856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.899888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.900283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.900317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.900612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.900641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.901009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.901039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.901390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.901420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.901788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.901818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.902222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.902255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.902619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.902649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.903034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.903080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.903444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.903474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.903853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.903883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.904122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.904153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.904562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.904593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.904980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.905009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.905384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.905416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.905778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.905809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.906178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.368 [2024-10-13 14:35:28.906209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.368 qpair failed and we were unable to recover it.
00:39:25.368 [2024-10-13 14:35:28.906590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.368 [2024-10-13 14:35:28.906620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.368 qpair failed and we were unable to recover it. 00:39:25.368 [2024-10-13 14:35:28.906973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.368 [2024-10-13 14:35:28.907003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.368 qpair failed and we were unable to recover it. 00:39:25.368 [2024-10-13 14:35:28.907373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.368 [2024-10-13 14:35:28.907404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.368 qpair failed and we were unable to recover it. 00:39:25.368 [2024-10-13 14:35:28.907782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.368 [2024-10-13 14:35:28.907813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.368 qpair failed and we were unable to recover it. 00:39:25.368 [2024-10-13 14:35:28.908141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.368 [2024-10-13 14:35:28.908173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.368 qpair failed and we were unable to recover it. 00:39:25.368 [2024-10-13 14:35:28.908572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.368 [2024-10-13 14:35:28.908602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.368 qpair failed and we were unable to recover it. 00:39:25.368 [2024-10-13 14:35:28.908978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.909008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.909402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.909433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.909813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.909842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.910197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.910228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 
00:39:25.369 [2024-10-13 14:35:28.910588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.910617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.910929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.910959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.911328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.911358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.911728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.911758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.912141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.912172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.912536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.912566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.912946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.912975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.913344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.913375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.913735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.913764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.914039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.914076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 
00:39:25.369 [2024-10-13 14:35:28.914470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.914500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.914858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.914887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.915256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.915286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.915635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.915665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.916044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.916084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.916449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.916479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.916739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.916768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.917127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.917158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.917415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.917446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.917789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.917820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 
00:39:25.369 [2024-10-13 14:35:28.918184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.918215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.918599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.918640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.919020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.919050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.919312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.919344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.919706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.919735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.920103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.920134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.920536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.920568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.920938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.920969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.921349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.921380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.921717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.921749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 
00:39:25.369 [2024-10-13 14:35:28.922143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.922174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.922488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.922520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.922922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.922952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.923194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.923224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.923586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.923617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.923989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.369 [2024-10-13 14:35:28.924020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.369 qpair failed and we were unable to recover it. 00:39:25.369 [2024-10-13 14:35:28.924245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.924276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.924672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.924703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.925100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.925132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.925359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.925391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 
00:39:25.370 [2024-10-13 14:35:28.925807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.925837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.926222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.926253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.926613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.926644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.926985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.927014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.927381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.927411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.927644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.927675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.927998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.928028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.928458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.928488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.928860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.928890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.929265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.929295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 
00:39:25.370 [2024-10-13 14:35:28.929530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.929563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.929995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.930025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.930408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.930441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.930793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.930823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.931058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.931101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.931544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.931574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.931926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.931956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.932219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.932249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.932609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.932637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.933009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.933038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 
00:39:25.370 [2024-10-13 14:35:28.933433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.933462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.933734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.933769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.934127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.934159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.934598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.934628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.934990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.935019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.935404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.935434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.935822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.935852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.936215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.936246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.936668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.936697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.936938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.936967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 
00:39:25.370 [2024-10-13 14:35:28.937215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.937245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.937622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.937651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.938024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.938052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.938417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.938447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.938749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.938777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.939147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.939177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.370 qpair failed and we were unable to recover it. 00:39:25.370 [2024-10-13 14:35:28.939406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.370 [2024-10-13 14:35:28.939437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.939804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.939832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.940222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.940252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.940628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.940657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 
00:39:25.371 [2024-10-13 14:35:28.941041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.941079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.941435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.941464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.941842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.941871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.942250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.942281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.942629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.942658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.943037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.943077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.943446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.943474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.943865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.943893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.944236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.944268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.944650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.944679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 
00:39:25.371 [2024-10-13 14:35:28.945042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.945081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.945335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.945364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.945717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.945746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.946124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.946154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.946528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.946556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.946903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.946931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.947314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.947344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.947597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.947629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.947983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.948011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.948315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.948344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 
00:39:25.371 [2024-10-13 14:35:28.948694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.948723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.949096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.949133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.949469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.949498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.949834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.949863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.950230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.950259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.950623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.950652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.951026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.951054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.951426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.951456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.951886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.951915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.952283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.952313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 
00:39:25.371 [2024-10-13 14:35:28.952669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.952699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.953043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.371 [2024-10-13 14:35:28.953082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.371 qpair failed and we were unable to recover it. 00:39:25.371 [2024-10-13 14:35:28.953441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.953470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.953828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.953856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.954234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.954263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.954602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.954631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.954998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.955027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.955379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.955410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.955783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.955812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.956169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.956199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 
00:39:25.372 [2024-10-13 14:35:28.956568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.956598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.956969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.956998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.957381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.957409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.957676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.957705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.958036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.958075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.958309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.958341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.958790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.958819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.959182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.959213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.959591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.959620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.960002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.960030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 
00:39:25.372 [2024-10-13 14:35:28.960388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.960418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.960782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.960811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.961188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.961219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.961611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.961639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.962022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.962050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.962437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.962467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.962847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.962876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.963260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.963289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.963662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.963691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.963951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.963980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 
00:39:25.372 [2024-10-13 14:35:28.964338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.964368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.964742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.964771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.965135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.965165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.965507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.965536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.965909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.965937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.966202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.966233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.966598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.966627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.966984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.967013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.967361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.967391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.967673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.967702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 
00:39:25.372 [2024-10-13 14:35:28.967930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.967961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.372 [2024-10-13 14:35:28.968216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.372 [2024-10-13 14:35:28.968248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.372 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.968617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.968646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.969031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.969059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.969460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.969490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.969826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.969854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.970226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.970256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.970494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.970522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.970911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.970940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.971374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.971404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 
00:39:25.373 [2024-10-13 14:35:28.971771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.971799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.972046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.972083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.972459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.972489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.972837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.972866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.973241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.973270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.973637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.973666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.974038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.974078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.974342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.974371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.974749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.974784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.975023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.975051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 
00:39:25.373 [2024-10-13 14:35:28.975453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.975483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.975921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.975949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.976201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.976231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.976604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.976633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.976852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.976880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.977119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.977149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.977532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.977560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.977977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.978006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.978370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.978400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.978768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.978798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 
00:39:25.373 [2024-10-13 14:35:28.979164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.979194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.979458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.979487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.979890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.979920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.980290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.980319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.980765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.980794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.981056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.981095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.981363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.981392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.981756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.981785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.982160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.982190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.982561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.982589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 
00:39:25.373 [2024-10-13 14:35:28.983024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.983053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.983310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.373 [2024-10-13 14:35:28.983341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.373 qpair failed and we were unable to recover it. 00:39:25.373 [2024-10-13 14:35:28.983704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.983734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.984101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.984132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.984491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.984520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.984899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.984929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.985303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.985335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.985698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.985727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.986120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.986150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.986530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.986558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 
00:39:25.374 [2024-10-13 14:35:28.986914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.986942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.987290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.987320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.987686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.987716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.988081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.988112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.988478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.988506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.988809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.988837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.989211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.989241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.989504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.989532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.989918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.989952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.990298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.990330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 
00:39:25.374 [2024-10-13 14:35:28.990702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.990731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.991092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.991121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.991452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.991480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.991848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.991876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.992255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.992284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.992610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.992638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.993039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.993077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.993448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.993476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.993851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.993880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.994239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.994269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 
00:39:25.374 [2024-10-13 14:35:28.994594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.994623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.994984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.995013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.995386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.995416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.995779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.995807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.996182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.996212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.996587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.996616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.996981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.997010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.997379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.997408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.997774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.997802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.998181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.998211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 
00:39:25.374 [2024-10-13 14:35:28.998667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.998695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.999055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.374 [2024-10-13 14:35:28.999093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.374 qpair failed and we were unable to recover it. 00:39:25.374 [2024-10-13 14:35:28.999349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:28.999378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:28.999741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:28.999769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.000232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.000263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.000653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.000682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.001053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.001104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.001465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.001495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.001612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.001642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.001975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.002004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 
00:39:25.375 [2024-10-13 14:35:29.002360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.002390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.002766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.002795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.002955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.002988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.003379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.003409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.003792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.003821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.004204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.004232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.004486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.004518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.004908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.004938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.005147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.005186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.005432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.005463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 
00:39:25.375 [2024-10-13 14:35:29.005856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.005885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.006251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.006282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.006515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.006545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.006920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.006948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.007203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.007236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.007616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.007645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.008030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.008059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.008434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.008464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.008727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.008755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.009110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.009141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 
00:39:25.375 [2024-10-13 14:35:29.009600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.009629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.009978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.010008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.010376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.010406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.010780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.010810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.011163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.011192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.011603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.011633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.011996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.012025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.012412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.012442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.012827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.012857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.013233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.013264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 
00:39:25.375 [2024-10-13 14:35:29.013636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.013665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.375 [2024-10-13 14:35:29.013910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.375 [2024-10-13 14:35:29.013938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.375 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.014293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.014323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.014688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.014718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.015078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.015109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.015340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.015371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.015730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.015759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.016016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.016044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.016489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.016519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.016870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.016899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 
00:39:25.376 [2024-10-13 14:35:29.017248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.017278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.017628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.017656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.018027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.018055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.018494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.018523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.018883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.018910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.019149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.019179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.019596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.019626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.019983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.020012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.020410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.020447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.020761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.020790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 
00:39:25.376 [2024-10-13 14:35:29.021144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.021174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.021548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.021577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.021912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.021940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.022202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.022234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.022485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.022515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.022889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.022917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.023292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.023321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.023658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.023687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.024025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.024054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.024416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.024446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 
00:39:25.376 [2024-10-13 14:35:29.024812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.024842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.025208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.025239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.025602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.025631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.025992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.026020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.026403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.026433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.026791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.026820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.027047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.027084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.376 [2024-10-13 14:35:29.027480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.376 [2024-10-13 14:35:29.027509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.376 qpair failed and we were unable to recover it. 00:39:25.377 [2024-10-13 14:35:29.027761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.377 [2024-10-13 14:35:29.027789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.377 qpair failed and we were unable to recover it. 00:39:25.377 [2024-10-13 14:35:29.028141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.377 [2024-10-13 14:35:29.028170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.377 qpair failed and we were unable to recover it. 
00:39:25.377 [2024-10-13 14:35:29.028569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.377 [2024-10-13 14:35:29.028597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.377 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure sequence above repeats continuously, with only the timestamps advancing, until the DPDK notice at 14:35:29.040 ...]
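errno 111 on Linux is ECONNREFUSED: each attempt to reach the target at 10.0.0.2:4420 (4420 being the conventional NVMe/TCP port) is actively refused because nothing is listening yet, so the driver gives up on the qpair and retries. A minimal, self-contained POSIX sketch that reproduces the same errno; plain C, not SPDK code, with the address and port taken from the log:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sa;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		return 1;
	}

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(4420);                /* NVMe/TCP port from the log */
	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		/* When the address is reachable but nothing listens, this prints:
		 * connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}

	close(fd);
	return 0;
}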
00:39:25.377 [2024-10-13 14:35:29.040123] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
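The notice above means the build links against an in-development DPDK release candidate (24.11.0-rc0) that SPDK does not officially support, so it is permitted only for validation runs like this one. One way to confirm which DPDK a build environment actually provides is DPDK's own runtime version string; a minimal sketch (assuming DPDK headers and pkg-config are installed; this is not how SPDK itself performs the check):

/* build (illustrative): cc dpdk_ver.c $(pkg-config --cflags --libs libdpdk) */
#include <stdio.h>
#include <rte_version.h>

int main(void)
{
	/* Prints the linked DPDK's version string, e.g. "DPDK 24.11.0-rc0". */
	printf("%s\n", rte_version());
	return 0;
}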
[... the connect()/qpair-failure sequence resumes after the notice and repeats, timestamps advancing from 14:35:29.040 through 14:35:29.092 ...]
00:39:25.655 [2024-10-13 14:35:29.092170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
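The notice above is printed by spdk_app_start() once the event framework has brought up one reactor per core in the application's core mask; from this point the test application can start driving the connect attempts recorded above. For reference, a minimal SPDK application skeleton that would take the same startup path; a sketch assuming SPDK's event-framework headers and libraries, with the app name and the 4-core mask chosen for illustration:

#include "spdk/stdinc.h"
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
	/* Runs on the first reactor once all cores are up; exit immediately. */
	spdk_app_stop(0);
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "core_probe";       /* illustrative name */
	opts.reactor_mask = "0xf";      /* 4 cores -> "Total cores available: 4" */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}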
[... the connect()/qpair-failure sequence resumes once more, timestamps advancing from 14:35:29.092 through 14:35:29.108 ...]
00:39:25.656 [2024-10-13 14:35:29.108590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.108621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.108973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.109002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.109351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.109382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.109641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.109671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.109995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.110023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.110383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.110413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.110780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.110812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.111149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.111180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.111543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.111573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.111928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.111959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 
00:39:25.656 [2024-10-13 14:35:29.112319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.112351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.112722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.112754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.113136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.113172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.113534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.113566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.113827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.113859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.114186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.114216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.114573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.114603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.114894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.114925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.115280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.115311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.115680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.115711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 
00:39:25.656 [2024-10-13 14:35:29.116108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.116140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.116490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.116521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.116956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.116986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.117356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.117387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.117823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.117858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.118218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.118249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.656 qpair failed and we were unable to recover it. 00:39:25.656 [2024-10-13 14:35:29.118503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.656 [2024-10-13 14:35:29.118533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.657 qpair failed and we were unable to recover it. 00:39:25.657 [2024-10-13 14:35:29.118887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.657 [2024-10-13 14:35:29.118917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.657 qpair failed and we were unable to recover it. 00:39:25.657 [2024-10-13 14:35:29.119306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.657 [2024-10-13 14:35:29.119337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.657 qpair failed and we were unable to recover it. 00:39:25.657 [2024-10-13 14:35:29.119570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.657 [2024-10-13 14:35:29.119599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.657 qpair failed and we were unable to recover it. 
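On Linux, errno = 111 is ECONNREFUSED: each connect() attempt is being actively refused because nothing is listening on 10.0.0.2 port 4420 at this point, so the initiator keeps retrying the qpair. A minimal sketch of the same reachability check from a shell, assuming netcat is available on the test host (the address and port are taken from the log above; the command is not part of the test itself):

  # probe the NVMe/TCP listener the initiator is retrying
  nc -vz 10.0.0.2 4420
  # while the target is not yet listening this prints "Connection refused"
  # and exits non-zero -- the same ECONNREFUSED (errno 111) logged above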
00:39:25.657 [... the connect() failed (errno = 111) / qpair failed sequence continues from 14:35:29.119985 through 14:35:29.121956 ...]
00:39:25.657 [2024-10-13 14:35:29.121969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:25.657 [2024-10-13 14:35:29.122019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:25.657 [2024-10-13 14:35:29.122028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:25.657 [2024-10-13 14:35:29.122038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:25.657 [2024-10-13 14:35:29.122045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:25.657 [... the connect() failed / qpair failed sequence resumes at 14:35:29.122292 and 14:35:29.122587 ...]
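The app_setup_trace notices above are SPDK's standard hint for capturing trace data from the running nvmf target. A minimal sketch of acting on them, assuming shell access to the test host while the application is still up (both commands come straight from the notices; '-i 0' is the instance ID named in the log, and the /tmp destination is illustrative):

  # take a live snapshot of nvmf tracepoint events from the running app
  spdk_trace -s nvmf -i 0
  # or preserve the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0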
00:39:25.657 [... the connect() failed (errno = 111) / qpair failed sequence continues from 14:35:29.122985 through 14:35:29.124052 ...]
00:39:25.657 [2024-10-13 14:35:29.124395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:39:25.657 [2024-10-13 14:35:29.124565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:39:25.657 [2024-10-13 14:35:29.124720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:39:25.657 [2024-10-13 14:35:29.124720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:39:25.657 [... interleaved with the reactor start-up, the connect() failed / qpair failed sequence continues from 14:35:29.124487 through 14:35:29.126022 ...]
00:39:25.657 [2024-10-13 14:35:29.126407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.657 [2024-10-13 14:35:29.126440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.657 qpair failed and we were unable to recover it.
00:39:25.657-00:39:25.660 [... the same three-line sequence for tqpair=0x7f5534000b90 (addr=10.0.0.2, port=4420) repeats without variation from 14:35:29.126879 through 14:35:29.168423 ...]
00:39:25.660 [2024-10-13 14:35:29.168783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.660 [2024-10-13 14:35:29.168819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420
00:39:25.660 qpair failed and we were unable to recover it.
00:39:25.660 [2024-10-13 14:35:29.169182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.169212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.169600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.169629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.169885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.169917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.170326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.170356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.170739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.170768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.170996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.171025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.171397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.171427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 00:39:25.660 [2024-10-13 14:35:29.171519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.660 [2024-10-13 14:35:29.171546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:25.660 qpair failed and we were unable to recover it. 
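A note on the failure mode above: errno = 111 is ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe-oF port), so each TCP connect() fails immediately and the initiator retries. A minimal standalone sketch of the same syscall path (a hypothetical test program, not SPDK's posix.c; it assumes no listener is present on that address):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);               /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With no listener on 10.0.0.2:4420 this fails with errno 111. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }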
00:39:25.660 Read completed with error (sct=0, sc=8)
00:39:25.660 starting I/O failed
[... 31 further completions omitted (16 more reads, 15 writes), every one "completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:39:25.661 [2024-10-13 14:35:29.172379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:25.661 [2024-10-13 14:35:29.172847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.661 [2024-10-13 14:35:29.172908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:25.661 qpair failed and we were unable to recover it.
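Decoding the completion status above: sct and sc are the status-code-type and status-code fields of the NVMe completion queue entry, and per the NVMe base spec sct=0 (generic command status) with sc=8 (0x08) reads as "Command Aborted due to SQ Deletion", consistent with in-flight reads and writes being aborted when the queue pair is torn down. The subsequent "CQ transport error -6" is -ENXIO, matching the quoted "No such device or address". A hedged decode sketch follows; the bitfield layout mirrors the CQE status word in the spec, but the struct and field names here are mine, not SPDK's:

    #include <stdint.h>
    #include <stdio.h>

    /* 16-bit word at the top of CQE dword 3: phase tag plus status field. */
    struct nvme_cqe_status {
        uint16_t p   : 1;  /* phase tag */
        uint16_t sc  : 8;  /* status code */
        uint16_t sct : 3;  /* status code type (0 = generic command status) */
        uint16_t crd : 2;  /* command retry delay */
        uint16_t m   : 1;  /* more status info available */
        uint16_t dnr : 1;  /* do not retry */
    };

    int main(void)
    {
        /* sct=0, sc=8 as seen in the log above. */
        struct nvme_cqe_status st = { .sct = 0, .sc = 8 };
        printf("sct=%u sc=%u -> %s\n", (unsigned)st.sct, (unsigned)st.sc,
               (st.sct == 0 && st.sc == 0x08)
                   ? "generic status: command aborted due to SQ deletion"
                   : "other status");
        return 0;
    }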
00:39:25.661 [2024-10-13 14:35:29.173312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.661 [2024-10-13 14:35:29.173412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:25.661 qpair failed and we were unable to recover it.
[... 148 identical connection attempts omitted (14:35:29.173620 through 14:35:29.223740): each logged the same connect() failure with errno = 111 against tqpair=0x7f5538000b90, addr=10.0.0.2, port=4420, and each ended with "qpair failed and we were unable to recover it." ...]
00:39:25.664 [2024-10-13 14:35:29.224117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.664 [2024-10-13 14:35:29.224146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:25.664 qpair failed and we were unable to recover it.
00:39:25.664 [2024-10-13 14:35:29.224378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.664 [2024-10-13 14:35:29.224407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.664 qpair failed and we were unable to recover it. 00:39:25.664 [2024-10-13 14:35:29.224670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.664 [2024-10-13 14:35:29.224703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.664 qpair failed and we were unable to recover it. 00:39:25.664 [2024-10-13 14:35:29.225094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.225123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.225477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.225505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.225880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.225909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.226243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.226272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.226506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.226534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.226921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.226951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.227207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.227237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.227360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.227390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 
00:39:25.665 [2024-10-13 14:35:29.227734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.227763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.228137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.228167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.228567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.228596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.228957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.228986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.229200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.229229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.229656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.229684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.229990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.230021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.230375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.230405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.230770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.230799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.231158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.231188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 
00:39:25.665 [2024-10-13 14:35:29.231539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.231574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.231939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.231968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.232342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.232372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.232750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.232779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.233148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.233177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.233545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.233575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.233941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.233971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.234345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.234376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.234584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.234613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.234973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.235000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 
00:39:25.665 [2024-10-13 14:35:29.235344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.235374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.235738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.235768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.235966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.235994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.236233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.236264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.236650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.236680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.237032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.237096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.237313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.665 [2024-10-13 14:35:29.237342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.665 qpair failed and we were unable to recover it. 00:39:25.665 [2024-10-13 14:35:29.237702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.237731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.237978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.238009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.238256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.238287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 
00:39:25.666 [2024-10-13 14:35:29.238493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.238522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.238875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.238904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.239124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.239153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.239534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.239562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.239940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.239968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.240356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.240386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.240732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.240762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.241148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.241177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.241580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.241609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.241812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.241842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 
00:39:25.666 [2024-10-13 14:35:29.242204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.242234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.242599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.242627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.242852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.242881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.243118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.243146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.243519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.243547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.243910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.243939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.244319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.244349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.244505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.244538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.244959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.244988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.245240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.245270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 
00:39:25.666 [2024-10-13 14:35:29.245650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.245686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.246051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.246089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.246453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.246483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.246707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.246737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.246975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.247004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.247260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.247293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.247536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.247565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.247959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.247990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.248359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.248389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.248750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.248780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 
00:39:25.666 [2024-10-13 14:35:29.249129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.249158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.249543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.249573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.249950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.249980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.250328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.250359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.250699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.250728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.251101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.251130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.251370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.251398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.666 qpair failed and we were unable to recover it. 00:39:25.666 [2024-10-13 14:35:29.251784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.666 [2024-10-13 14:35:29.251812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.252191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.252220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.252572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.252610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 
00:39:25.667 [2024-10-13 14:35:29.252974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.253004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.253246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.253275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.253634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.253663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.253980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.254008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.254337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.254366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.254570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.254598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.254957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.254988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.255380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.255410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.255777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.255805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.256165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.256195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 
00:39:25.667 [2024-10-13 14:35:29.256565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.256593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.256970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.256999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.257374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.257405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.257765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.257794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.258020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.258048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.258398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.258427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.258769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.258797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.259075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.259105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.259456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.259485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.259864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.259892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 
00:39:25.667 [2024-10-13 14:35:29.260257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.260293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.260721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.260750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.260975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.261005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.261293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.261323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.261695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.261726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.262105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.262135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.262485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.262514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.262733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.262765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.262983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.263011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.263263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.263295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 
00:39:25.667 [2024-10-13 14:35:29.263638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.263667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.263913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.263945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.264161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.264192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.264408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.264436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.264712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.264744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.265121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.265150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.265539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.265568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.667 [2024-10-13 14:35:29.265793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.667 [2024-10-13 14:35:29.265822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.667 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.266186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.266216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.266453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.266482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 
00:39:25.668 [2024-10-13 14:35:29.266697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.266728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.267109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.267140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.267529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.267558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.267907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.267935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.268151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.268181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.268420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.268450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.268897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.268926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.269140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.269180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.269468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.269497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.269722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.269750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 
00:39:25.668 [2024-10-13 14:35:29.270017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.270050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.270447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.270477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.270684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.270714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.271087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.271116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.271467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.271497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.271874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.271903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.272287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.272317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.272578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.272607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.273006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.273034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.273424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.273455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 
00:39:25.668 [2024-10-13 14:35:29.273914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.273944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.274296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.274326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.274548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.274576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.274836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.274865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.275244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.275274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.275649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.275679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.276041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.276085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.276464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.276492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.276874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.276903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.277271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.277301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 
00:39:25.668 [2024-10-13 14:35:29.277699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.277729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.278097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.278127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.278465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.278495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.278861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.278889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.279112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.279141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.279503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.279532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.279882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.279911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.280310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.668 [2024-10-13 14:35:29.280339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.668 qpair failed and we were unable to recover it. 00:39:25.668 [2024-10-13 14:35:29.280726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.280755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.281128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.281159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 
00:39:25.669 [2024-10-13 14:35:29.281534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.281563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.281937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.281967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.282319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.282348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.282698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.282727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.282975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.283005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.283367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.283398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.283657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.283686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.284032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.284074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.284434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.284464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.284841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.284870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 
00:39:25.669 [2024-10-13 14:35:29.285121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.285153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.285375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.285404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.285514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.285543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.285778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.285807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.285894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.285921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.286058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.286096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.286391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.286419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.286798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.286826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.287200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.287230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.287562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.287591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 
00:39:25.669 [2024-10-13 14:35:29.287964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.287992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.288352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.288382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.288759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.288788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.289149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.289179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.289544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.289574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.289954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.289984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.290277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.290307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.290670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.290699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.291047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.291087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.291435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.291464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 
00:39:25.669 [2024-10-13 14:35:29.291730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.291758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.292109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.292139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.292538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.292569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.292828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.669 [2024-10-13 14:35:29.292857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.669 qpair failed and we were unable to recover it. 00:39:25.669 [2024-10-13 14:35:29.293222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.293252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.293392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.293419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.293671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.293699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.294041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.294078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.294323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.294352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.294572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.294601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 
00:39:25.670 [2024-10-13 14:35:29.294844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.294873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.295157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.295187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.295391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.295429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.295763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.295792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.296178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.296209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.296586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.296614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.296995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.297023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.297263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.297301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.297672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.297702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.298083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.298113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 
00:39:25.670 [2024-10-13 14:35:29.298472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.298502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.298661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.298690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.299054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.299093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.299354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.299386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.299635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.299663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.299767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.299796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 
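Editor's note: every posix_sock_create failure above carries errno = 111, which on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting on port 4420 (the NVMe/TCP port) while the target subsystem is down, so each qpair connect is refused immediately. A minimal C sketch of the same condition follows; the address and port are copied from the log, and this is an illustration, not SPDK code.

    /* Minimal sketch: a TCP connect() to a reachable host with no
     * listener on the port fails with errno 111 (ECONNREFUSED) on
     * Linux, matching the posix_sock_create errors in this log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this fails with ECONNREFUSED. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Run against a live host with nothing bound to the port, this prints "connect() failed, errno = 111 (Connection refused)", the same errno the log reports on every attempt. (If the host itself were down, the attempts would instead time out or fail with a different errno, so the refusals here indicate the machine is up but the listener is gone.)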
00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Write completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 Read completed with error (sct=0, sc=8) 00:39:25.670 starting I/O failed 00:39:25.670 [2024-10-13 14:35:29.300606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:25.670 [2024-10-13 14:35:29.300954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.301021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 
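Editor's note: the burst of Read/Write completions failing with (sct=0, sc=8) above is every in-flight command on the qpair being completed with a Generic Command Status (SCT 0h) of Command Aborted due to SQ Deletion (SC 08h, per the NVMe base specification), after which spdk_nvme_qpair_process_completions declares the qpair dead with CQ transport error -6 (ENXIO, "No such device or address") and a new qpair (note the tqpair pointer changing to 0x7f5540000b90) starts connecting. A small sketch of how those two numbers unpack from the 16-bit status half of completion dword 3, with the bit layout as given in the NVMe base spec and the packed value constructed purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Upper half of CQE dword 3: bit 0 = phase tag, bits 8:1 = SC,
         * bits 11:9 = SCT, bit 15 = DNR. Pack SCT=0, SC=0x8 to mirror
         * the completions in the log. */
        uint16_t status = (uint16_t)((0x0u << 9) | (0x8u << 1));

        unsigned sc  = (status >> 1) & 0xFF; /* Status Code */
        unsigned sct = (status >> 9) & 0x7;  /* Status Code Type */
        unsigned dnr = (status >> 15) & 0x1; /* Do Not Retry */

        printf("sct=%u, sc=%u, dnr=%u\n", sct, sc, dnr);
        /* Prints sct=0, sc=8: generic status, Command Aborted due to
         * SQ Deletion, i.e. queued I/O flushed when the qpair dropped. */
        return 0;
    }

In other words, the I/O errors here are a consequence of the transport loss, not independent media or command failures.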
00:39:25.670 [2024-10-13 14:35:29.301469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.301574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.301849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.301884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.302439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.302543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.303034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.303088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.303435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.303538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.303994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.670 [2024-10-13 14:35:29.304033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.670 qpair failed and we were unable to recover it. 00:39:25.670 [2024-10-13 14:35:29.304445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.304477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.304730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.304759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.305343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.305447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.305962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.306000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 
00:39:25.671 [2024-10-13 14:35:29.306405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.306438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.306860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.306890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.307098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.307129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.307488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.307517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.307890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.307918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.308298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.308330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.308720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.308750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.309130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.309161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.309609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.309637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.310000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.310036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 
00:39:25.671 [2024-10-13 14:35:29.310312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.310342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.310707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.310737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.311098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.311128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.311491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.311527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.311878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.311907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.312260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.312291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.312652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.312681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.313053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.313099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.313347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.313375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.313741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.313770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 
00:39:25.671 [2024-10-13 14:35:29.314148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.314179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.314552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.314581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.314913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.314943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.315239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.315269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.315526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.315559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.315736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.315765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.316142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.316171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.316413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.316442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.316608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.316638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.316887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.316920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 
00:39:25.671 [2024-10-13 14:35:29.317284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.317315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.317556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.317589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.317971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.318000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.318377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.318408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.318764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.318794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.319136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.319166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.671 qpair failed and we were unable to recover it. 00:39:25.671 [2024-10-13 14:35:29.319538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.671 [2024-10-13 14:35:29.319567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.319947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.319977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.320389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.320419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.320801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.320830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 
00:39:25.672 [2024-10-13 14:35:29.321197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.321228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.321458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.321487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.321946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.321975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.322333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.322364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.322757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.322787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.323009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.323037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.323464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.323494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.323848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.323876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.324237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.324267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.324509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.324541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 
00:39:25.672 [2024-10-13 14:35:29.324882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.324913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.325278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.325310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.325658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.325690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.326056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.326103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.326334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.326363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.326599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.326627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.326990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.327020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.327390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.327421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.327634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.327662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.328029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.328058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 
00:39:25.672 [2024-10-13 14:35:29.328496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.328525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.328887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.328916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.329269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.329299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.329633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.329663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.329874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.329902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.330241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.330271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.330629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.330658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.331028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.331058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.331441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.331475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.331842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.331874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 
00:39:25.672 [2024-10-13 14:35:29.332089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.332118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.332505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.332534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.332829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.332858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.333240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.333274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.333535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.333567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.333803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.333834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.672 [2024-10-13 14:35:29.334188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.672 [2024-10-13 14:35:29.334219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.672 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.334468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.334500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.334861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.334893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.335248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.335280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 
00:39:25.673 [2024-10-13 14:35:29.335670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.335701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.335927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.335956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.336169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.336205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.336442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.336474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.336722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.336752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.337156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.337187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.337404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.337436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.337795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.337825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.338099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.338128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.338540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.338572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 
00:39:25.673 [2024-10-13 14:35:29.338938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.338969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.339228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.339258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.339633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.339662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.340001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.340032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.340422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.340453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.340675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.340704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.340986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.341018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.341247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.341277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.341658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.341687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.341927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.341955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 
00:39:25.673 [2024-10-13 14:35:29.342051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.342091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.342678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.342788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.343265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.343311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.343678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.343709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.343936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.343966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.344355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.344390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.344761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.344790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.345289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.345326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.345709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.345739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 00:39:25.673 [2024-10-13 14:35:29.346093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.673 [2024-10-13 14:35:29.346126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.673 qpair failed and we were unable to recover it. 
00:39:25.673 [2024-10-13 14:35:29.346487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.346519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.346917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.346948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.347166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.347199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.347558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.347588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.347967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.347997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.348367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.348400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.348619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.348650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.349024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.349055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.349433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.349464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.349779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.349812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 
00:39:25.950 [2024-10-13 14:35:29.350203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.350233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.350491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.350522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.350760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.350791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.351147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.351179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.351556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.351585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.352030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.352061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.352395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.352423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.352651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.352683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.353030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.353086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.353524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.353554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 
00:39:25.950 [2024-10-13 14:35:29.353899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.353930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.354297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.354328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.354706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.354736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.355181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.355216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.355605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.355637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.356007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.356039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.356271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.356302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.356661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.356692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.357106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.357136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.357361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.357390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 
00:39:25.950 [2024-10-13 14:35:29.357625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.357656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.357894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.357925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.358084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.358114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.358546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.358576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.358796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.358825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.359197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.359228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.359679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.359710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.359801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.950 [2024-10-13 14:35:29.359828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.950 qpair failed and we were unable to recover it. 00:39:25.950 [2024-10-13 14:35:29.359967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.359998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.360363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.360395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 
00:39:25.951 [2024-10-13 14:35:29.360770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.360801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.361164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.361198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.361578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.361607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.361834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.361863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.362117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.362149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.362433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.362467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.362706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.362737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.363100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.363132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.363387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.363417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.363779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.363810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 
00:39:25.951 [2024-10-13 14:35:29.364059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.364100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.364478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.364516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.364720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.364752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.365121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.365153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.365530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.365559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.365925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.365955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.366291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.366323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.366668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.366699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.367090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.367123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.367377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.367408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 
00:39:25.951 [2024-10-13 14:35:29.367774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.367804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.368375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.368413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.368787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.368824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.369183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.369217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.369577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.369607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.369853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.369883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.370115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.370147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.370505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.370536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.370899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.370928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.371310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.371341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 
00:39:25.951 [2024-10-13 14:35:29.371706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.371739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.372119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.372152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.372611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.372640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.372866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.372895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.373263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.373294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.373516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.373547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.373823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.373857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.373987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.374022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.374420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.951 [2024-10-13 14:35:29.374460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.951 qpair failed and we were unable to recover it. 00:39:25.951 [2024-10-13 14:35:29.374696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.374727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 
00:39:25.952 [2024-10-13 14:35:29.375097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.375129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.375499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.375531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.375769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.375798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.376001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.376032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.376399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.376430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.376780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.376812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.377182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.377215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.377588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.377618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.377976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.378005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.378374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.378404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 
00:39:25.952 [2024-10-13 14:35:29.378773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.378802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.379188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.379220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.379471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.379502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.379902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.379933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.380177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.380209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.380600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.380630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.380981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.381013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.381376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.381410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.381779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.381809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.382073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.382106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 
00:39:25.952 [2024-10-13 14:35:29.382494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.382524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.382742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.382772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.383138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.383171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.383547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.383578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.383785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.383814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.384019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.384057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.384318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.384349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.384756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.384787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.385144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.385174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.385465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.385498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 
00:39:25.952 [2024-10-13 14:35:29.385733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.385764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.386129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.386161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.386520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.386553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.386933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.386964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.387184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.387217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.387598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.387627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.387837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.387865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.388245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.388277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.388637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.388672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 00:39:25.952 [2024-10-13 14:35:29.389073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.952 [2024-10-13 14:35:29.389105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.952 qpair failed and we were unable to recover it. 
00:39:25.952 [2024-10-13 14:35:29.389474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.389503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.389866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.389895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.390153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.390183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.390426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.390457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.390680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.390711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.391010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.391040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.391424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.391456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.391682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.391713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.391960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.391989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 00:39:25.953 [2024-10-13 14:35:29.392224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.953 [2024-10-13 14:35:29.392255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4310 with addr=10.0.0.2, port=4420 00:39:25.953 qpair failed and we were unable to recover it. 
00:39:25.956 [2024-10-13 14:35:29.435147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.956 [2024-10-13 14:35:29.435252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420
00:39:25.956 qpair failed and we were unable to recover it.
00:39:25.958 [the same connect()/qpair-failure pair repeats for tqpair=0x7f5540000b90 through 2024-10-13 14:35:29.462242]
00:39:25.958 [2024-10-13 14:35:29.462449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.462477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.462849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.462878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.462967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.462995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.463224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.463253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.463610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.463640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.464006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.464036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.464410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.464440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.464828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.464858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.465232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.465262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.465643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.465673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 
00:39:25.958 [2024-10-13 14:35:29.466109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.466139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.466426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.466455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.466803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.466832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.467049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.467087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.467445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.467474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.467744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.467774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.468130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.468160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.468505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.468536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.468660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.468688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.468940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.468968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 
00:39:25.958 [2024-10-13 14:35:29.469406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.469436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.958 qpair failed and we were unable to recover it. 00:39:25.958 [2024-10-13 14:35:29.469547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.958 [2024-10-13 14:35:29.469576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.469804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.469833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.470200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.470231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.470585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.470615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.470989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.471018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.471376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.471414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.471791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.471820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.472191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.472222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.472586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.472616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 
00:39:25.959 [2024-10-13 14:35:29.472862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.472891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.473301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.473337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.473722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.473750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.474120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.474150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.474525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.474554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.474912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.474942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.475152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.475181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.475539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.475568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.475776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.475805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.476190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.476221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 
00:39:25.959 [2024-10-13 14:35:29.476572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.476602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.476940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.476970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.477221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.477251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.477629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.477659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.478001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.478029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.478248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.478279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.478670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.478700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.478923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.478952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.479327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.479358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.479706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.479735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 
00:39:25.959 [2024-10-13 14:35:29.480094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.480123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.480385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.480413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.480774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.480803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.481019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.481047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.481449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.481480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.481865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.481895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.482249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.482279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.482652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.482682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.482936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.482966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.483223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.483255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 
00:39:25.959 [2024-10-13 14:35:29.483450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.483480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.959 [2024-10-13 14:35:29.483705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.959 [2024-10-13 14:35:29.483734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.959 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.483950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.483979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.484259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.484289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.484516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.484544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.484799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.484828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.485096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.485126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.485500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.485528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.485753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.485784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.486146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.486176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 
00:39:25.960 [2024-10-13 14:35:29.486327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.486355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.486714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.486750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.487110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.487141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.487389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.487417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.487634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.487662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.487906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.487936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.488148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.488178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.488416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.488445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.488829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.488859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.489195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.489226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 
00:39:25.960 [2024-10-13 14:35:29.489613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.489642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.489880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.489908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.490281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.490310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.490545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.490573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.490947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.490977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.491362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.491393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.491727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.491756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.492140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.492169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.492400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.492428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.492796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.492826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 
00:39:25.960 [2024-10-13 14:35:29.493197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.493228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.493576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.493605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.493985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.494015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.494415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.494446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.494809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.494839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.495044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.495083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.495527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.495557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.495901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.495932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.496289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.496320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.496694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.496723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 
00:39:25.960 [2024-10-13 14:35:29.497185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.497214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.960 [2024-10-13 14:35:29.497584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.960 [2024-10-13 14:35:29.497613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.960 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.497958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.497988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.498329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.498361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.498725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.498754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.498979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.499010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.499381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.499412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.499632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.499660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.499878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.499909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.500299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.500330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 
00:39:25.961 [2024-10-13 14:35:29.500720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.500749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.500965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.501002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.501105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.501134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.501335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.501364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.501603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.501632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.501874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.501903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.502284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.502316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.502496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.502526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.502739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.502768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.503050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.503089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 
00:39:25.961 [2024-10-13 14:35:29.503355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.503383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.503600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.503631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.503850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.503879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.504256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.504287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.504636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.504665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.505044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.505082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.505442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.505471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.505674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.505703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.505940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.505968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 00:39:25.961 [2024-10-13 14:35:29.506195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.961 [2024-10-13 14:35:29.506225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.961 qpair failed and we were unable to recover it. 
00:39:25.961 [2024-10-13 14:35:29.506472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:25.961 [2024-10-13 14:35:29.506501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420
00:39:25.961 qpair failed and we were unable to recover it.
[... the same three-line record repeats back-to-back, with only the timestamps advancing, through [2024-10-13 14:35:29.580113]: every connect() attempt for tqpair=0x7f5540000b90 (addr=10.0.0.2, port=4420) fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:39:25.967 [2024-10-13 14:35:29.580363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.580394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.580748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.580778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.581159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.581190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.581484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.581515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.581824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.581853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.582153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.582183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.582551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.582581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.582961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.582990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.583374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.583404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.583773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.583804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 
00:39:25.967 [2024-10-13 14:35:29.584164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.584196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.584567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.584599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.584976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.585005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.585399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.585431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.585800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.585831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.586176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.586208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.586467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.586496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.586735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.586764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.587027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.587057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.587443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.587483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 
00:39:25.967 [2024-10-13 14:35:29.587691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.587723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.588081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.588112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.588550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.588583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.588936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.588966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.589373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.589404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.589648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.589677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.589873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.589910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.590116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.590147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.590520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.590550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.590809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.590841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 
00:39:25.967 [2024-10-13 14:35:29.591219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.591254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.591357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.591386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.591733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.591763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.592127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.592158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.967 qpair failed and we were unable to recover it. 00:39:25.967 [2024-10-13 14:35:29.592255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.967 [2024-10-13 14:35:29.592283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.592511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.592539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.592903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.592932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.593158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.593190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.593401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.593436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.593803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.593834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 
00:39:25.968 [2024-10-13 14:35:29.594247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.594277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.594497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.594525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.594953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.594983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.595353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.595384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.595752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.595781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.595992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.596023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.596220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.596251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.596496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.596526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.596769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.596800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.596971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.597001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 
00:39:25.968 [2024-10-13 14:35:29.597430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.597461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.597828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.597859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.598118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.598151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.598556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.598588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.598953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.598984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.599412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.599445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.599693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.599723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.600087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.600119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.600489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.600519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.600818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.600847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 
00:39:25.968 [2024-10-13 14:35:29.601073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.601104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.601482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.601514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.601816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.601846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.602192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.602222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.602580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.602609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.602958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.602989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.603352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.603383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.603755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.603785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.604158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.604188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.604601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.604631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 
00:39:25.968 [2024-10-13 14:35:29.605004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.605034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.605413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.605444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.605810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.605840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.606214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.606244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.968 [2024-10-13 14:35:29.606486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.968 [2024-10-13 14:35:29.606518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.968 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.606868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.606897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.607173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.607202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.607571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.607601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.607957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.607992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.608340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.608373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 
00:39:25.969 [2024-10-13 14:35:29.608547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.608576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.608940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.608971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.609184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.609216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.609613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.609645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.609862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.609892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.610142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.610173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.610521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.610552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.610913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.610944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.611290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.611323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.611674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.611703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 
00:39:25.969 [2024-10-13 14:35:29.612083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.612113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.612367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.612396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.612694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.612727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.613095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.613127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.613481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.613512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.613873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.613903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.614274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.614307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.614733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.614763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.615121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.615155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.615415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.615446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 
00:39:25.969 [2024-10-13 14:35:29.615830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.615861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.616261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.616291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.616637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.616667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.616874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.616904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.617290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.617321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.617571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.617604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.617850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.617881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.618165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.618198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.618428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.618457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.618711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.618745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 
00:39:25.969 [2024-10-13 14:35:29.618979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.619012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.619232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.619264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.619647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.619681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.620070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.620102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.969 [2024-10-13 14:35:29.620475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.969 [2024-10-13 14:35:29.620507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.969 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.620880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.620909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.621281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.621312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.621688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.621717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.621945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.621985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.622338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.622370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 
00:39:25.970 [2024-10-13 14:35:29.622598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.622628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.622725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.622754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.623014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.623044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.623271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.623303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.623472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.623502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.623730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.623758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.624123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.624153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.624416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.624451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.624814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.624844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 00:39:25.970 [2024-10-13 14:35:29.625221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.625253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it. 
00:39:25.970 [2024-10-13 14:35:29.625353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:25.970 [2024-10-13 14:35:29.625382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:25.970 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt between 14:35:29.625 and 14:35:29.697 (elapsed 00:39:25.970-00:39:26.247); only the timestamps change, while errno (111), tqpair (0x7f5540000b90), address (10.0.0.2), and port (4420) stay constant throughout ...]
00:39:26.247 [2024-10-13 14:35:29.697593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.697622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it.
00:39:26.247 [2024-10-13 14:35:29.698006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.698036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it. 00:39:26.247 [2024-10-13 14:35:29.698426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.698458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it. 00:39:26.247 [2024-10-13 14:35:29.698826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.698854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it. 00:39:26.247 [2024-10-13 14:35:29.699097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.699127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it. 00:39:26.247 [2024-10-13 14:35:29.699472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.699501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it. 00:39:26.247 [2024-10-13 14:35:29.699883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.699913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.247 qpair failed and we were unable to recover it. 00:39:26.247 [2024-10-13 14:35:29.700285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.247 [2024-10-13 14:35:29.700315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.700709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.700738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.701115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.701148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.701439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.701467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 
00:39:26.248 [2024-10-13 14:35:29.701839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.701868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.702294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.702324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.702708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.702737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.703124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.703184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.703554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.703586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.703817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.703846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.704100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.704129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.704221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.704248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.704433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.704469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.704699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.704729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 
00:39:26.248 [2024-10-13 14:35:29.705075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.705107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.705375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.705412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.705789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.705819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.706025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.706054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.706424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.706455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.706668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.706697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.706910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.706941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.707195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.707226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.707469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.707498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.707723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.707752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 
00:39:26.248 [2024-10-13 14:35:29.708162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.708191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.708557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.708586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.708956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.708988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.709123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.709155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.709608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.709638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.710073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.710103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.710313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.710342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.710674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.710703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.710931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.710959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.711170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.711200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 
00:39:26.248 [2024-10-13 14:35:29.711534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.711563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.711845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.711873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.712154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.712184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.712580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.712610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.713024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.713053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.713446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.713478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.248 [2024-10-13 14:35:29.713847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.248 [2024-10-13 14:35:29.713877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.248 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.714232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.714262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.714488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.714516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.714879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.714907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 
00:39:26.249 [2024-10-13 14:35:29.715362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.715391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.715595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.715624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.715856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.715893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.716266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.716296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.716664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.716692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.717093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.717123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.717358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.717387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.717822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.717852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.718342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.718377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.718740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.718770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 
00:39:26.249 [2024-10-13 14:35:29.718974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.719003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.719350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.719381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.719766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.719796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.720177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.720208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.720539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.720569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.720825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.720854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:26.249 [2024-10-13 14:35:29.721117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.721147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.721242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.721269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.721369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.721397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:39:26.249 qpair failed and we were unable to recover it. 
00:39:26.249 [2024-10-13 14:35:29.721630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.721659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:26.249 [2024-10-13 14:35:29.721758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.721785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:26.249 [2024-10-13 14:35:29.722023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c22b0 is same with the state(6) to be set 00:39:26.249 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:26.249 [2024-10-13 14:35:29.722799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.722903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.723335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.723439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.723901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.723938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.724458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.724559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.725022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.725060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 00:39:26.249 [2024-10-13 14:35:29.725469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.249 [2024-10-13 14:35:29.725502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.249 qpair failed and we were unable to recover it. 
00:39:26.250 [... the same three-line failure — connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. — repeats for tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420, timestamps advancing from 14:35:29.723335 through 14:35:29.756741 ...]
00:39:26.252 [2024-10-13 14:35:29.756988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.757019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.757412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.757444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.757544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.757572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.757840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.757873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.758098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.758129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.758334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.758363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.758775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.758805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.759045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.759085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.759185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.759213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.759453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.759488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 
00:39:26.252 [2024-10-13 14:35:29.759864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.759894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.760047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.760087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.760449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.760478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.760717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.760751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.761104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.761135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.761223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.761254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5534000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.761821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.761926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.762556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.762663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.763353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.763458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 00:39:26.252 [2024-10-13 14:35:29.763761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.252 [2024-10-13 14:35:29.763799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 00:39:26.252 qpair failed and we were unable to recover it. 
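errno = 111 is ECONNREFUSED on Linux: nothing was accepting TCP connections at 10.0.0.2:4420 yet, which is expected at this point since the NVMe-oF target being configured below has not started listening (4420 is the IANA-assigned NVMe/TCP port). A shell sketch to confirm the errno mapping and probe the listener by hand; the address and port are taken from the log, and nc being installed is an assumption:

    # Confirm what errno 111 means on this platform.
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'  # prints: 111 Connection refused
    # Probe the NVMe/TCP listener; refused until the target exposes port 4420.
    nc -zv 10.0.0.2 4420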
00:39:26.252 [... connect() failures for tqpair=0x7f5540000b90 continue while the target is configured ...]
00:39:26.252 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:26.252 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:39:26.252 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.252 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:26.252 [... connect() failures for tqpair=0x7f5540000b90 continue ...]
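For reference, rpc_cmd in the SPDK test harness wraps scripts/rpc.py against the running target app, so the traced step above corresponds roughly to the hand-run sketch below (assuming the default RPC socket, /var/tmp/spdk.sock):

    # Sketch of the step traced above: create a 64 MiB malloc bdev with a
    # 512-byte block size, named Malloc0.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # On success the RPC prints the new bdev's name -- the lone "Malloc0"
    # line that shows up further down in this log.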
00:39:26.252 [... connect() failed, errno = 111 and the matching sock connection error of tqpair=0x7f5540000b90 with addr=10.0.0.2, port=4420 repeat continuously from 14:35:29.767 through 14:35:29.803; every qpair failed and we were unable to recover it ...]
00:39:26.255 [... connect() failures for tqpair=0x7f5540000b90 continue through 14:35:29.806 ...]
00:39:26.255 Malloc0
00:39:26.255 [2024-10-13 14:35:29.806934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.807038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.255 qpair failed and we were unable to recover it.
00:39:26.255 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.255 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:39:26.255 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.255 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:26.255 [... connect() failures for tqpair=0x7f5538000b90 continue throughout ...]
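Likewise, the traced rpc_cmd above maps roughly onto the sketch below (again assuming the default RPC socket; -t selects the transport type, and -o is passed through exactly as the test script does):

    # Sketch: initialize the TCP transport inside the nvmf target, matching
    # the "rpc_cmd nvmf_create_transport -t tcp -o" step traced above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    # The target acknowledges with the "*** TCP Transport Init ***" notice
    # that appears just below.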
00:39:26.255 [2024-10-13 14:35:29.811513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.811542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.255 qpair failed and we were unable to recover it.
00:39:26.255 [2024-10-13 14:35:29.811895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.811924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.255 qpair failed and we were unable to recover it.
00:39:26.255 [2024-10-13 14:35:29.812390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.812421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.255 qpair failed and we were unable to recover it.
00:39:26.255 [2024-10-13 14:35:29.812677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.812705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.255 qpair failed and we were unable to recover it.
00:39:26.255 [2024-10-13 14:35:29.813061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.813103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.255 qpair failed and we were unable to recover it.
00:39:26.255 [2024-10-13 14:35:29.813482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.255 [2024-10-13 14:35:29.813512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.256 qpair failed and we were unable to recover it.
00:39:26.256 [2024-10-13 14:35:29.813769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.256 [2024-10-13 14:35:29.813799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.256 qpair failed and we were unable to recover it.
00:39:26.256 [2024-10-13 14:35:29.814013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.256 [2024-10-13 14:35:29.814042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.256 qpair failed and we were unable to recover it.
00:39:26.256 [2024-10-13 14:35:29.814087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:26.256 [2024-10-13 14:35:29.814306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.256 [2024-10-13 14:35:29.814337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.256 qpair failed and we were unable to recover it.
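The *** TCP Transport Init *** notice is the target acknowledging the rpc_cmd nvmf_create_transport -t tcp call traced a few lines earlier. Outside the harness the same step is a single RPC; a sketch, assuming a running nvmf_tgt and SPDK's rpc.py (the harness's extra -o flag is omitted here rather than guessed at):

  rpc.py nvmf_create_transport -t TCP   # initialize the TCP transport inside the target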
00:39:26.256 [2024-10-13 14:35:29.814698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.814728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.815104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.815136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.815409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.815439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.815665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.815693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.815892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.815926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.816085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.816116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.816384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.816418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.816577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.816610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.816852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.816891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.817158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.817189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 
00:39:26.256 [2024-10-13 14:35:29.817566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.817597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.817841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.817871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.818111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.818140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.818368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.818397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.818574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.818602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.818912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.818948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.819372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.819404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.819641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.819673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.819901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.819931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.820208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.820240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 
00:39:26.256 [2024-10-13 14:35:29.820495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.820525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.820906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.820936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.821083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.821113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.821358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.821394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.821757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.821787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.822167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.822201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.822437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.822475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 [2024-10-13 14:35:29.822864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.822894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 00:39:26.256 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.256 [2024-10-13 14:35:29.823396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.256 [2024-10-13 14:35:29.823429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.256 qpair failed and we were unable to recover it. 
00:39:26.256 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-10-13 14:35:29.823786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.256 [2024-10-13 14:35:29.823817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.256 qpair failed and we were unable to recover it.
00:39:26.256 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.256 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-10-13 14:35:29.824286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.824316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.824667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.824697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.825084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.825116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.825490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.825520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.825746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.825774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.826149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.826179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.826630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.826659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.826907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.826936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.827306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.827337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.827596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.827625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.828007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.828037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.828383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.828423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.828768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.828797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.829052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.829091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.829488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.829517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.829864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.829894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.830244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.830274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 
00:39:26.257 [2024-10-13 14:35:29.830655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.830685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.831074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.831107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.831448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.831479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.831852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.831885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.832129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.832159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.832535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.832564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.832814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.832844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.833214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.833247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.833463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.833494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.833859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.833889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 
00:39:26.257 [2024-10-13 14:35:29.834285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.834317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.834537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.834569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.834948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.834980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] [2024-10-13 14:35:29.835227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.835259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-10-13 14:35:29.835644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.835675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-10-13 14:35:29.835888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.835919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-10-13 14:35:29.836294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.836326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
00:39:26.257 [2024-10-13 14:35:29.836547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.257 [2024-10-13 14:35:29.836578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.257 qpair failed and we were unable to recover it.
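With the nvmf_subsystem_add_ns call above, the target-side bring-up now has a subsystem (created earlier with -a, allow any host, and serial number SPDK00000000000001) and the Malloc0 bdev attached to it as a namespace. Run as standalone RPCs the same two steps look like this sketch (assumes rpc.py and a running target; the flags mirror the traced commands):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host to connect
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace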
00:39:26.257 [2024-10-13 14:35:29.836952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.836982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.837344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.837375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.837628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.837658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.257 qpair failed and we were unable to recover it. 00:39:26.257 [2024-10-13 14:35:29.837888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.257 [2024-10-13 14:35:29.837918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.838293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.838323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.838469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.838499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.838882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.838913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.839296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.839328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.839707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.839738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.840096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.840127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 
00:39:26.258 [2024-10-13 14:35:29.840512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.840542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.840943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.840975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.841312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.841345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.841696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.841726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.842007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.842038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.842277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.842310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.842692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.842723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.843095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.843127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.843347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.843377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.843736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.843768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 
00:39:26.258 [2024-10-13 14:35:29.844018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.844048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.844478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.844516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.844872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.844902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.845174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.845206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.845571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.845600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.845822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.845858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.846228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.846260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.846621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.846652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.846744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.846773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.847027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.847057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 
00:39:26.258 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] [2024-10-13 14:35:29.847464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.847496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-10-13 14:35:29.847864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-10-13 14:35:29.847894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 [2024-10-13 14:35:29.847996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.848026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-10-13 14:35:29.848415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.848447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 [2024-10-13 14:35:29.848826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.848857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 [2024-10-13 14:35:29.849225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.849258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 [2024-10-13 14:35:29.849646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.849675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 [2024-10-13 14:35:29.850050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.258 [2024-10-13 14:35:29.850089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.258 qpair failed and we were unable to recover it.
00:39:26.258 [2024-10-13 14:35:29.850451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.850482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.850843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.850873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.851225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.851256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.851469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.258 [2024-10-13 14:35:29.851501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.258 qpair failed and we were unable to recover it. 00:39:26.258 [2024-10-13 14:35:29.851864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.259 [2024-10-13 14:35:29.851892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.852108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.259 [2024-10-13 14:35:29.852138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.852375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.259 [2024-10-13 14:35:29.852403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.852678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.259 [2024-10-13 14:35:29.852708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.852833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.259 [2024-10-13 14:35:29.852865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.853093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:26.259 [2024-10-13 14:35:29.853126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420 00:39:26.259 qpair failed and we were unable to recover it. 
00:39:26.259 [2024-10-13 14:35:29.853395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.259 [2024-10-13 14:35:29.853423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.259 qpair failed and we were unable to recover it.
00:39:26.259 [2024-10-13 14:35:29.853793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.259 [2024-10-13 14:35:29.853821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.259 qpair failed and we were unable to recover it.
00:39:26.259 [2024-10-13 14:35:29.854096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:26.259 [2024-10-13 14:35:29.854128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5538000b90 with addr=10.0.0.2, port=4420
00:39:26.259 qpair failed and we were unable to recover it.
00:39:26.259 [2024-10-13 14:35:29.854425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:26.259 [2024-10-13 14:35:29.855571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.259 [2024-10-13 14:35:29.855711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.259 [2024-10-13 14:35:29.855765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.259 [2024-10-13 14:35:29.855788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.259 [2024-10-13 14:35:29.855812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.259 [2024-10-13 14:35:29.855863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.259 qpair failed and we were unable to recover it.
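The listen notice above confirms the nvmf_subsystem_add_listener call, and the failure mode changes at the same moment: TCP now connects, but the Fabrics CONNECT for the I/O queue is rejected, the target reporting "Unknown controller ID 0x1" and the host seeing sct 1, sc 130 before giving up with CQ transport error -6. Decoding that status pair (a sketch; reading 0x82 as the Fabrics "Connect Invalid Parameters" code is my interpretation of the NVMe-oF status values, not something the log itself states):

  printf 'sct=%d sc=0x%02x\n' 1 130   # sct 1 = command-specific status, sc 0x82 ~ Connect Invalid Parameters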
00:39:26.259 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.259 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:39:26.259 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:26.259 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:39:26.259 [2024-10-13 14:35:29.865125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.259 [2024-10-13 14:35:29.865233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.259 [2024-10-13 14:35:29.865277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.259 [2024-10-13 14:35:29.865303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.259 [2024-10-13 14:35:29.865323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.259 [2024-10-13 14:35:29.865368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.259 qpair failed and we were unable to recover it.
00:39:26.259 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:26.259 14:35:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1990336
00:39:26.259 [2024-10-13 14:35:29.875085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.259 [2024-10-13 14:35:29.875186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.259 [2024-10-13 14:35:29.875217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.259 [2024-10-13 14:35:29.875233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.259 [2024-10-13 14:35:29.875249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.259 [2024-10-13 14:35:29.875281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.259 qpair failed and we were unable to recover it.
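The wait 1990336 above is the harness blocking on the PID of the backgrounded test while these CONNECT retries keep failing. What the embedded initiator attempts on each retry is roughly a host-side fabrics connect; expressed with nvme-cli it would look like the sketch below (an illustration only, assuming nvme-cli is installed; it is not part of the test):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1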
00:39:26.259 [2024-10-13 14:35:29.885054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.259 [2024-10-13 14:35:29.885149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.259 [2024-10-13 14:35:29.885171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.259 [2024-10-13 14:35:29.885183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.259 [2024-10-13 14:35:29.885196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.259 [2024-10-13 14:35:29.885219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.895061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.259 [2024-10-13 14:35:29.895151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.259 [2024-10-13 14:35:29.895168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.259 [2024-10-13 14:35:29.895176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.259 [2024-10-13 14:35:29.895183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.259 [2024-10-13 14:35:29.895199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.904972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.259 [2024-10-13 14:35:29.905041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.259 [2024-10-13 14:35:29.905056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.259 [2024-10-13 14:35:29.905070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.259 [2024-10-13 14:35:29.905077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.259 [2024-10-13 14:35:29.905093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.259 qpair failed and we were unable to recover it. 
00:39:26.259 [2024-10-13 14:35:29.914956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.259 [2024-10-13 14:35:29.915014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.259 [2024-10-13 14:35:29.915035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.259 [2024-10-13 14:35:29.915043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.259 [2024-10-13 14:35:29.915049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.259 [2024-10-13 14:35:29.915070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.924980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.259 [2024-10-13 14:35:29.925050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.259 [2024-10-13 14:35:29.925075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.259 [2024-10-13 14:35:29.925083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.259 [2024-10-13 14:35:29.925089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.259 [2024-10-13 14:35:29.925105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.259 qpair failed and we were unable to recover it. 00:39:26.259 [2024-10-13 14:35:29.935048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.259 [2024-10-13 14:35:29.935133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.259 [2024-10-13 14:35:29.935151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.259 [2024-10-13 14:35:29.935158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.259 [2024-10-13 14:35:29.935164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.259 [2024-10-13 14:35:29.935180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.259 qpair failed and we were unable to recover it. 
00:39:26.523 [2024-10-13 14:35:29.945034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.523 [2024-10-13 14:35:29.945132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.523 [2024-10-13 14:35:29.945149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.523 [2024-10-13 14:35:29.945157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.523 [2024-10-13 14:35:29.945163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.523 [2024-10-13 14:35:29.945179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.523 qpair failed and we were unable to recover it. 00:39:26.523 [2024-10-13 14:35:29.954919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.523 [2024-10-13 14:35:29.954980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.523 [2024-10-13 14:35:29.954997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.523 [2024-10-13 14:35:29.955004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.523 [2024-10-13 14:35:29.955018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.523 [2024-10-13 14:35:29.955034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.523 qpair failed and we were unable to recover it. 00:39:26.523 [2024-10-13 14:35:29.965021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:26.523 [2024-10-13 14:35:29.965096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:26.523 [2024-10-13 14:35:29.965113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:26.523 [2024-10-13 14:35:29.965120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:26.523 [2024-10-13 14:35:29.965126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:26.523 [2024-10-13 14:35:29.965142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:26.523 qpair failed and we were unable to recover it. 
00:39:26.523 [2024-10-13 14:35:29.975053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.523 [2024-10-13 14:35:29.975122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.523 [2024-10-13 14:35:29.975139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.523 [2024-10-13 14:35:29.975146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.523 [2024-10-13 14:35:29.975153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.523 [2024-10-13 14:35:29.975168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.523 qpair failed and we were unable to recover it.
00:39:26.523 [2024-10-13 14:35:29.985009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.523 [2024-10-13 14:35:29.985076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.523 [2024-10-13 14:35:29.985093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.523 [2024-10-13 14:35:29.985100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.523 [2024-10-13 14:35:29.985107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.523 [2024-10-13 14:35:29.985123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:29.994997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:29.995058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:29.995081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:29.995088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:29.995094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:29.995110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.005040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.005130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.005156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.005163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.005170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.005189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.014944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.015015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.015034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.015042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.015048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.015077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.024959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.025043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.025060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.025076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.025083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.025099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.035086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.035153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.035171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.035179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.035187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.035202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.045079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.045149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.045165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.045173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.045184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.045200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.055137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.055211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.055229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.055237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.055244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.055261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.065098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.065170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.065187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.065195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.065201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.065217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.075104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.075168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.075186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.075194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.075200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.075217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.085106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.085201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.085241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.085253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.085260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.085289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.095068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.095147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.095167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.095175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.095183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.095200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.105109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.105176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.105193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.105200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.105207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.105223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.115090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.115155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.115172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.115179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.115185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.115203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.125123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.524 [2024-10-13 14:35:30.125197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.524 [2024-10-13 14:35:30.125215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.524 [2024-10-13 14:35:30.125222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.524 [2024-10-13 14:35:30.125228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.524 [2024-10-13 14:35:30.125244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.524 qpair failed and we were unable to recover it.
00:39:26.524 [2024-10-13 14:35:30.135180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.135248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.135264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.135277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.135283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.135299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.145181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.145247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.145263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.145271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.145278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.145295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.155178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.155245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.155262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.155269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.155277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.155293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.165179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.165245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.165264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.165272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.165280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.165297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.175211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.175285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.175302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.175311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.175318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.175335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.185184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.185267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.185283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.185291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.185298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.185314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.195192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.195288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.195304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.195311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.195318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.195335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.205207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.205269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.205286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.205294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.205300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.205316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.215256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.215329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.215346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.215353] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.215360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.215376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.525 [2024-10-13 14:35:30.225224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.525 [2024-10-13 14:35:30.225312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.525 [2024-10-13 14:35:30.225328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.525 [2024-10-13 14:35:30.225340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.525 [2024-10-13 14:35:30.225347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.525 [2024-10-13 14:35:30.225364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.525 qpair failed and we were unable to recover it.
00:39:26.788 [2024-10-13 14:35:30.235207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.788 [2024-10-13 14:35:30.235267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.788 [2024-10-13 14:35:30.235284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.788 [2024-10-13 14:35:30.235292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.788 [2024-10-13 14:35:30.235299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.788 [2024-10-13 14:35:30.235315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.788 qpair failed and we were unable to recover it.
00:39:26.788 [2024-10-13 14:35:30.245248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.788 [2024-10-13 14:35:30.245321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.788 [2024-10-13 14:35:30.245337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.788 [2024-10-13 14:35:30.245346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.788 [2024-10-13 14:35:30.245353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.788 [2024-10-13 14:35:30.245369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.788 qpair failed and we were unable to recover it.
00:39:26.788 [2024-10-13 14:35:30.255161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.788 [2024-10-13 14:35:30.255234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.788 [2024-10-13 14:35:30.255255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.788 [2024-10-13 14:35:30.255265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.788 [2024-10-13 14:35:30.255275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.788 [2024-10-13 14:35:30.255295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.788 qpair failed and we were unable to recover it.
00:39:26.788 [2024-10-13 14:35:30.265141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.788 [2024-10-13 14:35:30.265200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.788 [2024-10-13 14:35:30.265219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.788 [2024-10-13 14:35:30.265227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.788 [2024-10-13 14:35:30.265233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.788 [2024-10-13 14:35:30.265250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.788 qpair failed and we were unable to recover it.
00:39:26.788 [2024-10-13 14:35:30.275296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.788 [2024-10-13 14:35:30.275356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.788 [2024-10-13 14:35:30.275374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.788 [2024-10-13 14:35:30.275381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.788 [2024-10-13 14:35:30.275387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.788 [2024-10-13 14:35:30.275403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.788 qpair failed and we were unable to recover it.
00:39:26.788 [2024-10-13 14:35:30.285275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.788 [2024-10-13 14:35:30.285338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.788 [2024-10-13 14:35:30.285355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.788 [2024-10-13 14:35:30.285362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.788 [2024-10-13 14:35:30.285368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.285384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.295281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.295347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.295364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.295371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.295378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.295393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.305312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.305383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.305400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.305407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.305414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.305429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.315275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.315379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.315400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.315407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.315413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.315429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.325294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.325400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.325417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.325425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.325431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.325447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.335224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.335304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.335320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.335327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.335334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.335349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.345385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.345445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.345462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.345469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.345476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.345491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.355317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.355377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.355393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.355401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.355407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.355427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.365340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.365407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.365423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.365430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.365437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.365452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.375362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.375436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.375452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.375459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.375466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.375481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.385394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.385490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.385506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.385513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.385520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.385535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.395258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.395315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.395331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.395339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.395345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.395360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.405233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.405301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.405323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.405330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.405337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.405360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.415386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.415461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.415477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.415484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.415491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.789 [2024-10-13 14:35:30.415506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.789 qpair failed and we were unable to recover it.
00:39:26.789 [2024-10-13 14:35:30.425364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.789 [2024-10-13 14:35:30.425432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.789 [2024-10-13 14:35:30.425448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.789 [2024-10-13 14:35:30.425455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.789 [2024-10-13 14:35:30.425462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.425477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:26.790 [2024-10-13 14:35:30.435247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.790 [2024-10-13 14:35:30.435310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.790 [2024-10-13 14:35:30.435326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.790 [2024-10-13 14:35:30.435333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.790 [2024-10-13 14:35:30.435340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.435355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:26.790 [2024-10-13 14:35:30.445381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.790 [2024-10-13 14:35:30.445444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.790 [2024-10-13 14:35:30.445461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.790 [2024-10-13 14:35:30.445468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.790 [2024-10-13 14:35:30.445474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.445495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:26.790 [2024-10-13 14:35:30.455422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.790 [2024-10-13 14:35:30.455510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.790 [2024-10-13 14:35:30.455527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.790 [2024-10-13 14:35:30.455534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.790 [2024-10-13 14:35:30.455540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.455556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:26.790 [2024-10-13 14:35:30.465382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.790 [2024-10-13 14:35:30.465462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.790 [2024-10-13 14:35:30.465480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.790 [2024-10-13 14:35:30.465487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.790 [2024-10-13 14:35:30.465493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.465509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:26.790 [2024-10-13 14:35:30.475420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.790 [2024-10-13 14:35:30.475481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.790 [2024-10-13 14:35:30.475498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.790 [2024-10-13 14:35:30.475505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.790 [2024-10-13 14:35:30.475511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.475526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:26.790 [2024-10-13 14:35:30.485455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:26.790 [2024-10-13 14:35:30.485520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:26.790 [2024-10-13 14:35:30.485537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:26.790 [2024-10-13 14:35:30.485545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:26.790 [2024-10-13 14:35:30.485551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:26.790 [2024-10-13 14:35:30.485566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:26.790 qpair failed and we were unable to recover it.
00:39:27.053 [2024-10-13 14:35:30.495456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:27.053 [2024-10-13 14:35:30.495537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:27.053 [2024-10-13 14:35:30.495553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:27.053 [2024-10-13 14:35:30.495561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:27.053 [2024-10-13 14:35:30.495567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:27.053 [2024-10-13 14:35:30.495583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:27.053 qpair failed and we were unable to recover it.
00:39:27.053 [2024-10-13 14:35:30.505413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:27.053 [2024-10-13 14:35:30.505477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:27.053 [2024-10-13 14:35:30.505494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:27.053 [2024-10-13 14:35:30.505501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:27.053 [2024-10-13 14:35:30.505507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:27.053 [2024-10-13 14:35:30.505522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:27.053 qpair failed and we were unable to recover it.
00:39:27.053 [2024-10-13 14:35:30.515416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:27.053 [2024-10-13 14:35:30.515467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:27.053 [2024-10-13 14:35:30.515484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:27.053 [2024-10-13 14:35:30.515491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:27.053 [2024-10-13 14:35:30.515497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:27.053 [2024-10-13 14:35:30.515512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:27.053 qpair failed and we were unable to recover it.
00:39:27.053 [2024-10-13 14:35:30.525443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:27.053 [2024-10-13 14:35:30.525507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:27.053 [2024-10-13 14:35:30.525523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:27.053 [2024-10-13 14:35:30.525530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:27.053 [2024-10-13 14:35:30.525536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:27.053 [2024-10-13 14:35:30.525551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:27.053 qpair failed and we were unable to recover it.
00:39:27.053 [2024-10-13 14:35:30.535460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:27.053 [2024-10-13 14:35:30.535535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:27.053 [2024-10-13 14:35:30.535552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:27.053 [2024-10-13 14:35:30.535559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:27.053 [2024-10-13 14:35:30.535570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:27.053 [2024-10-13 14:35:30.535586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:27.053 qpair failed and we were unable to recover it.
00:39:27.053 [2024-10-13 14:35:30.545318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.053 [2024-10-13 14:35:30.545383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.053 [2024-10-13 14:35:30.545400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.053 [2024-10-13 14:35:30.545407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.053 [2024-10-13 14:35:30.545414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.053 [2024-10-13 14:35:30.545429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.053 qpair failed and we were unable to recover it. 00:39:27.053 [2024-10-13 14:35:30.555428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.053 [2024-10-13 14:35:30.555488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.053 [2024-10-13 14:35:30.555505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.053 [2024-10-13 14:35:30.555512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.053 [2024-10-13 14:35:30.555518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.053 [2024-10-13 14:35:30.555534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.053 qpair failed and we were unable to recover it. 00:39:27.053 [2024-10-13 14:35:30.565391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.053 [2024-10-13 14:35:30.565460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.053 [2024-10-13 14:35:30.565476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.053 [2024-10-13 14:35:30.565484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.053 [2024-10-13 14:35:30.565491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.053 [2024-10-13 14:35:30.565506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.053 qpair failed and we were unable to recover it. 
00:39:27.053 [2024-10-13 14:35:30.575526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.575601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.575617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.575624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.575630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.575645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.585453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.585521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.585537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.585544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.585551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.585566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.595515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.595570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.595588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.595597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.595606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.595622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 
00:39:27.054 [2024-10-13 14:35:30.605528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.605595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.605611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.605618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.605625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.605640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.615532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.615604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.615621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.615627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.615634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.615649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.625524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.625602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.625618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.625630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.625637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.625652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 
00:39:27.054 [2024-10-13 14:35:30.635472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.635539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.635555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.635562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.635569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.635584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.645429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.645498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.645514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.645522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.645528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.645543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.655583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.655654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.655670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.655677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.655684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.655699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 
00:39:27.054 [2024-10-13 14:35:30.665541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.665643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.665659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.665666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.665673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.665688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.675568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.675623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.675641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.675648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.675655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.675671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.685565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.685640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.685657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.685664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.685670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.685685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 
00:39:27.054 [2024-10-13 14:35:30.695614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.695682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.695699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.695706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.695713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.695728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.705580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.054 [2024-10-13 14:35:30.705639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.054 [2024-10-13 14:35:30.705656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.054 [2024-10-13 14:35:30.705663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.054 [2024-10-13 14:35:30.705669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.054 [2024-10-13 14:35:30.705686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.054 qpair failed and we were unable to recover it. 00:39:27.054 [2024-10-13 14:35:30.715599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.055 [2024-10-13 14:35:30.715665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.055 [2024-10-13 14:35:30.715682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.055 [2024-10-13 14:35:30.715694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.055 [2024-10-13 14:35:30.715700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.055 [2024-10-13 14:35:30.715717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.055 qpair failed and we were unable to recover it. 
00:39:27.055 [2024-10-13 14:35:30.725606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.055 [2024-10-13 14:35:30.725668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.055 [2024-10-13 14:35:30.725684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.055 [2024-10-13 14:35:30.725692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.055 [2024-10-13 14:35:30.725698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.055 [2024-10-13 14:35:30.725713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.055 qpair failed and we were unable to recover it. 00:39:27.055 [2024-10-13 14:35:30.735637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.055 [2024-10-13 14:35:30.735715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.055 [2024-10-13 14:35:30.735732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.055 [2024-10-13 14:35:30.735739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.055 [2024-10-13 14:35:30.735746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.055 [2024-10-13 14:35:30.735761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.055 qpair failed and we were unable to recover it. 00:39:27.055 [2024-10-13 14:35:30.745606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.055 [2024-10-13 14:35:30.745673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.055 [2024-10-13 14:35:30.745692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.055 [2024-10-13 14:35:30.745699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.055 [2024-10-13 14:35:30.745705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.055 [2024-10-13 14:35:30.745721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.055 qpair failed and we were unable to recover it. 
00:39:27.055 [2024-10-13 14:35:30.755581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.055 [2024-10-13 14:35:30.755648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.055 [2024-10-13 14:35:30.755665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.055 [2024-10-13 14:35:30.755672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.055 [2024-10-13 14:35:30.755678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.055 [2024-10-13 14:35:30.755693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.055 qpair failed and we were unable to recover it. 00:39:27.318 [2024-10-13 14:35:30.765632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.318 [2024-10-13 14:35:30.765698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.318 [2024-10-13 14:35:30.765715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.318 [2024-10-13 14:35:30.765722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.318 [2024-10-13 14:35:30.765728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.318 [2024-10-13 14:35:30.765743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.318 qpair failed and we were unable to recover it. 00:39:27.318 [2024-10-13 14:35:30.775654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.318 [2024-10-13 14:35:30.775720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.318 [2024-10-13 14:35:30.775736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.318 [2024-10-13 14:35:30.775743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.318 [2024-10-13 14:35:30.775750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.318 [2024-10-13 14:35:30.775765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.318 qpair failed and we were unable to recover it. 
00:39:27.318 [2024-10-13 14:35:30.785663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.318 [2024-10-13 14:35:30.785784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.318 [2024-10-13 14:35:30.785818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.318 [2024-10-13 14:35:30.785828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.318 [2024-10-13 14:35:30.785835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.318 [2024-10-13 14:35:30.785857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.318 qpair failed and we were unable to recover it. 00:39:27.318 [2024-10-13 14:35:30.795665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.318 [2024-10-13 14:35:30.795733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.318 [2024-10-13 14:35:30.795767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.318 [2024-10-13 14:35:30.795777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.318 [2024-10-13 14:35:30.795784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.318 [2024-10-13 14:35:30.795807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.318 qpair failed and we were unable to recover it. 00:39:27.318 [2024-10-13 14:35:30.805633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.318 [2024-10-13 14:35:30.805708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.318 [2024-10-13 14:35:30.805749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.318 [2024-10-13 14:35:30.805759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.318 [2024-10-13 14:35:30.805765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.318 [2024-10-13 14:35:30.805789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.318 qpair failed and we were unable to recover it. 
00:39:27.318 [2024-10-13 14:35:30.815567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.318 [2024-10-13 14:35:30.815650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.318 [2024-10-13 14:35:30.815669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.318 [2024-10-13 14:35:30.815677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.318 [2024-10-13 14:35:30.815683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.815701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.825630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.825695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.825712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.825719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.825726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.825742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.835640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.835701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.835719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.835726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.835732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.835748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 
00:39:27.319 [2024-10-13 14:35:30.845552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.845617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.845633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.845641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.845647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.845668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.855633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.855703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.855721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.855732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.855739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.855755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.865704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.865782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.865800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.865807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.865817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.865836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 
00:39:27.319 [2024-10-13 14:35:30.875702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.875780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.875797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.875804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.875811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.875826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.885713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.885816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.885852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.885862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.885869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.885892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.895723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.895800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.895826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.895834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.895840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.895858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 
00:39:27.319 [2024-10-13 14:35:30.905752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.905817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.905852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.905861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.905868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.905892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.915592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.915678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.915698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.915706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.915712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.915730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.925615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.925696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.925714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.925721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.925727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.925743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 
00:39:27.319 [2024-10-13 14:35:30.935783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.935858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.319 [2024-10-13 14:35:30.935875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.319 [2024-10-13 14:35:30.935882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.319 [2024-10-13 14:35:30.935889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.319 [2024-10-13 14:35:30.935911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.319 qpair failed and we were unable to recover it. 00:39:27.319 [2024-10-13 14:35:30.945769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.319 [2024-10-13 14:35:30.945891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:30.945927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:30.945938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:30.945946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:30.945968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 00:39:27.320 [2024-10-13 14:35:30.955749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:30.955808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:30.955830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:30.955838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:30.955845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:30.955862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 
00:39:27.320 [2024-10-13 14:35:30.965833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:30.965915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:30.965950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:30.965959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:30.965967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:30.965990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 00:39:27.320 [2024-10-13 14:35:30.975770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:30.975848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:30.975868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:30.975875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:30.975882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:30.975898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 00:39:27.320 [2024-10-13 14:35:30.985829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:30.985894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:30.985918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:30.985925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:30.985932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:30.985948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 
00:39:27.320 [2024-10-13 14:35:30.995652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:30.995720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:30.995740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:30.995747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:30.995754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:30.995770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 00:39:27.320 [2024-10-13 14:35:31.005779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:31.005849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:31.005865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:31.005873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:31.005879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:31.005894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 00:39:27.320 [2024-10-13 14:35:31.015809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.320 [2024-10-13 14:35:31.015872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.320 [2024-10-13 14:35:31.015889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.320 [2024-10-13 14:35:31.015897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.320 [2024-10-13 14:35:31.015903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.320 [2024-10-13 14:35:31.015919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.320 qpair failed and we were unable to recover it. 
00:39:27.583 [2024-10-13 14:35:31.025782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.583 [2024-10-13 14:35:31.025885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.583 [2024-10-13 14:35:31.025904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.583 [2024-10-13 14:35:31.025912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.025925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.025943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.035729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.035799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.035817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.035824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.035830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.035846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.045726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.045788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.045805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.045812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.045818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.045834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 
00:39:27.584 [2024-10-13 14:35:31.055734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.055811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.055832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.055841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.055853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.055870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.065797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.065853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.065871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.065878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.065884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.065900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.075818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.075895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.075913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.075921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.075927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.075942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 
00:39:27.584 [2024-10-13 14:35:31.085847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.085917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.085934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.085941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.085948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.085963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.095886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.095960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.095977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.095984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.095991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.096009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.105836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.105889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.105906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.105913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.105919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.105935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 
00:39:27.584 [2024-10-13 14:35:31.115716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.115806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.115823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.115831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.115843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.115859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.125858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.125922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.125938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.125945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.125952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.125967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.135897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.135967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.135983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.135990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.135996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.136012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 
00:39:27.584 [2024-10-13 14:35:31.145887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.145948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.145964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.145972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.145978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.145993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.155865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.155951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.155968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.155975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.584 [2024-10-13 14:35:31.155981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.584 [2024-10-13 14:35:31.155997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.584 qpair failed and we were unable to recover it. 00:39:27.584 [2024-10-13 14:35:31.165902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.584 [2024-10-13 14:35:31.165969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.584 [2024-10-13 14:35:31.165986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.584 [2024-10-13 14:35:31.165993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.166000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.166015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 
00:39:27.585 [2024-10-13 14:35:31.175941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.176004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.176020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.176027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.176033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.176048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.185922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.185981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.185997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.186004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.186010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.186026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.195898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.195961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.195979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.195986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.195993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.196009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 
00:39:27.585 [2024-10-13 14:35:31.205984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.206053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.206076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.206088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.206094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.206111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.215969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.216042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.216060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.216073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.216080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.216096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.225807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.225869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.225887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.225895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.225901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.225922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 
00:39:27.585 [2024-10-13 14:35:31.235925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.235988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.236005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.236012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.236019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.236036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.245998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.246075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.246092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.246099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.246105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.246121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.255993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.256082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.256100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.256107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.256113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.256130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 
00:39:27.585 [2024-10-13 14:35:31.266002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.266071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.266090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.266098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.266104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.266120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.275974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.276035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.276052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.276060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.276074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.276091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 00:39:27.585 [2024-10-13 14:35:31.285996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.585 [2024-10-13 14:35:31.286111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.585 [2024-10-13 14:35:31.286132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.585 [2024-10-13 14:35:31.286143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.585 [2024-10-13 14:35:31.286152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.585 [2024-10-13 14:35:31.286171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.585 qpair failed and we were unable to recover it. 
00:39:27.849 [2024-10-13 14:35:31.295959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.849 [2024-10-13 14:35:31.296034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.849 [2024-10-13 14:35:31.296061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.849 [2024-10-13 14:35:31.296076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.849 [2024-10-13 14:35:31.296083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.849 [2024-10-13 14:35:31.296099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.849 qpair failed and we were unable to recover it. 00:39:27.849 [2024-10-13 14:35:31.306000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.849 [2024-10-13 14:35:31.306069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.849 [2024-10-13 14:35:31.306086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.849 [2024-10-13 14:35:31.306093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.849 [2024-10-13 14:35:31.306099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.849 [2024-10-13 14:35:31.306115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.849 qpair failed and we were unable to recover it. 00:39:27.849 [2024-10-13 14:35:31.316023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.849 [2024-10-13 14:35:31.316091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.849 [2024-10-13 14:35:31.316108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.849 [2024-10-13 14:35:31.316116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.849 [2024-10-13 14:35:31.316123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.849 [2024-10-13 14:35:31.316139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.849 qpair failed and we were unable to recover it. 
00:39:27.849 [2024-10-13 14:35:31.325979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.849 [2024-10-13 14:35:31.326045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.849 [2024-10-13 14:35:31.326068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.849 [2024-10-13 14:35:31.326075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.849 [2024-10-13 14:35:31.326082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.849 [2024-10-13 14:35:31.326097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.849 qpair failed and we were unable to recover it. 00:39:27.849 [2024-10-13 14:35:31.336090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.849 [2024-10-13 14:35:31.336163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.849 [2024-10-13 14:35:31.336180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.849 [2024-10-13 14:35:31.336187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.849 [2024-10-13 14:35:31.336193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.849 [2024-10-13 14:35:31.336209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.849 qpair failed and we were unable to recover it. 00:39:27.849 [2024-10-13 14:35:31.345999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.849 [2024-10-13 14:35:31.346075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.849 [2024-10-13 14:35:31.346094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.849 [2024-10-13 14:35:31.346101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.849 [2024-10-13 14:35:31.346107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.849 [2024-10-13 14:35:31.346123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.849 qpair failed and we were unable to recover it. 
00:39:27.850 [2024-10-13 14:35:31.356045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.356115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.356132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.356139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.356146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.356161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.366084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.366152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.366168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.366177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.366184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.366200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.376073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.376184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.376200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.376208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.376214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.376230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 
00:39:27.850 [2024-10-13 14:35:31.386061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.386133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.386159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.386166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.386172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.386189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.395948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.396020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.396037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.396044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.396050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.396071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.406046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.406118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.406135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.406142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.406149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.406165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 
00:39:27.850 [2024-10-13 14:35:31.416112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.416177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.416194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.416201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.416207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.416222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.426077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.426142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.426158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.426165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.426172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.426193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.436106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.436164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.436180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.436187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.436193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.436208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 
00:39:27.850 [2024-10-13 14:35:31.446079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.446147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.446164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.446171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.446177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.446193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.456140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.456222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.456239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.456246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.456253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.456268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.466119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.466181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.466198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.466205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.466211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.466227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 
00:39:27.850 [2024-10-13 14:35:31.476130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.476219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.476240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.476247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.476254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.476269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.850 [2024-10-13 14:35:31.486136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.850 [2024-10-13 14:35:31.486201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.850 [2024-10-13 14:35:31.486217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.850 [2024-10-13 14:35:31.486225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.850 [2024-10-13 14:35:31.486231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.850 [2024-10-13 14:35:31.486247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.850 qpair failed and we were unable to recover it. 00:39:27.851 [2024-10-13 14:35:31.496124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.851 [2024-10-13 14:35:31.496193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.851 [2024-10-13 14:35:31.496210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.851 [2024-10-13 14:35:31.496217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.851 [2024-10-13 14:35:31.496223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.851 [2024-10-13 14:35:31.496239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.851 qpair failed and we were unable to recover it. 
00:39:27.851 [2024-10-13 14:35:31.506133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.851 [2024-10-13 14:35:31.506189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.851 [2024-10-13 14:35:31.506207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.851 [2024-10-13 14:35:31.506214] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.851 [2024-10-13 14:35:31.506223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.851 [2024-10-13 14:35:31.506239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.851 qpair failed and we were unable to recover it. 00:39:27.851 [2024-10-13 14:35:31.516148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.851 [2024-10-13 14:35:31.516205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.851 [2024-10-13 14:35:31.516221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.851 [2024-10-13 14:35:31.516228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.851 [2024-10-13 14:35:31.516239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.851 [2024-10-13 14:35:31.516255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.851 qpair failed and we were unable to recover it. 00:39:27.851 [2024-10-13 14:35:31.526155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.851 [2024-10-13 14:35:31.526257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.851 [2024-10-13 14:35:31.526274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.851 [2024-10-13 14:35:31.526281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.851 [2024-10-13 14:35:31.526288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.851 [2024-10-13 14:35:31.526303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.851 qpair failed and we were unable to recover it. 
00:39:27.851 [2024-10-13 14:35:31.536199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.851 [2024-10-13 14:35:31.536278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.851 [2024-10-13 14:35:31.536295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.851 [2024-10-13 14:35:31.536302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.851 [2024-10-13 14:35:31.536309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.851 [2024-10-13 14:35:31.536325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.851 qpair failed and we were unable to recover it. 00:39:27.851 [2024-10-13 14:35:31.546088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:27.851 [2024-10-13 14:35:31.546151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:27.851 [2024-10-13 14:35:31.546169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:27.851 [2024-10-13 14:35:31.546176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:27.851 [2024-10-13 14:35:31.546183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:27.851 [2024-10-13 14:35:31.546198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:27.851 qpair failed and we were unable to recover it. 00:39:28.114 [2024-10-13 14:35:31.556171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.114 [2024-10-13 14:35:31.556232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.114 [2024-10-13 14:35:31.556251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.114 [2024-10-13 14:35:31.556258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.114 [2024-10-13 14:35:31.556264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.114 [2024-10-13 14:35:31.556280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.114 qpair failed and we were unable to recover it. 
00:39:28.114 [2024-10-13 14:35:31.566197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.114 [2024-10-13 14:35:31.566270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.114 [2024-10-13 14:35:31.566287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.114 [2024-10-13 14:35:31.566294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.114 [2024-10-13 14:35:31.566301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.114 [2024-10-13 14:35:31.566316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.114 qpair failed and we were unable to recover it. 00:39:28.114 [2024-10-13 14:35:31.576225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.114 [2024-10-13 14:35:31.576297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.114 [2024-10-13 14:35:31.576314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.114 [2024-10-13 14:35:31.576321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.114 [2024-10-13 14:35:31.576327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.114 [2024-10-13 14:35:31.576343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.114 qpair failed and we were unable to recover it. 00:39:28.114 [2024-10-13 14:35:31.586070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.114 [2024-10-13 14:35:31.586133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.114 [2024-10-13 14:35:31.586149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.114 [2024-10-13 14:35:31.586156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.114 [2024-10-13 14:35:31.586163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.115 [2024-10-13 14:35:31.586179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.115 qpair failed and we were unable to recover it. 
00:39:28.115 [2024-10-13 14:35:31.596221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.115 [2024-10-13 14:35:31.596286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.115 [2024-10-13 14:35:31.596302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.115 [2024-10-13 14:35:31.596309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.115 [2024-10-13 14:35:31.596315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.115 [2024-10-13 14:35:31.596331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.115 qpair failed and we were unable to recover it. 00:39:28.115 [2024-10-13 14:35:31.606159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.115 [2024-10-13 14:35:31.606229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.115 [2024-10-13 14:35:31.606248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.115 [2024-10-13 14:35:31.606256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.115 [2024-10-13 14:35:31.606277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.115 [2024-10-13 14:35:31.606294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.115 qpair failed and we were unable to recover it. 00:39:28.115 [2024-10-13 14:35:31.616274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.115 [2024-10-13 14:35:31.616344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.115 [2024-10-13 14:35:31.616360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.115 [2024-10-13 14:35:31.616368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.115 [2024-10-13 14:35:31.616374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.115 [2024-10-13 14:35:31.616390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.115 qpair failed and we were unable to recover it. 
00:39:28.115 [2024-10-13 14:35:31.626245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.626310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.626326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.626334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.626340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.626355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.636245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.636318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.636334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.636342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.636348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.636363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.646262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.646328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.646345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.646352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.646358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.646373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.656175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.656269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.656287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.656295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.656301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.656317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.666284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.666364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.666380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.666388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.666394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.666409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.676309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.676370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.676388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.676395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.676401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.676416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.686299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.686368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.686385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.686393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.686399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.686414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.696322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.696399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.696415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.696428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.696434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.696450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.706186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.706255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.706273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.706280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.706288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.706309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.716289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.716351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.716368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.716376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.716382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.716397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.726200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.115 [2024-10-13 14:35:31.726273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.115 [2024-10-13 14:35:31.726291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.115 [2024-10-13 14:35:31.726298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.115 [2024-10-13 14:35:31.726304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.115 [2024-10-13 14:35:31.726320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.115 qpair failed and we were unable to recover it.
00:39:28.115 [2024-10-13 14:35:31.736324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.736387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.736403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.736411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.736417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.736433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.746306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.746372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.746389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.746397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.746403] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.746418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.756330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.756398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.756415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.756423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.756429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.756445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.766333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.766398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.766415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.766422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.766429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.766444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.776352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.776420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.776437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.776444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.776450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.776466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.786387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.786493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.786510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.786522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.786529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.786545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.796361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.796417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.796434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.796442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.796448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.796463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.806372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.806446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.806462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.806469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.806476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.806490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.116 [2024-10-13 14:35:31.816391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.116 [2024-10-13 14:35:31.816461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.116 [2024-10-13 14:35:31.816477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.116 [2024-10-13 14:35:31.816484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.116 [2024-10-13 14:35:31.816490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.116 [2024-10-13 14:35:31.816505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.116 qpair failed and we were unable to recover it.
00:39:28.378 [2024-10-13 14:35:31.826312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.378 [2024-10-13 14:35:31.826363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.378 [2024-10-13 14:35:31.826378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.378 [2024-10-13 14:35:31.826385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.378 [2024-10-13 14:35:31.826392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.378 [2024-10-13 14:35:31.826407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.378 qpair failed and we were unable to recover it.
00:39:28.378 [2024-10-13 14:35:31.836235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.378 [2024-10-13 14:35:31.836287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.378 [2024-10-13 14:35:31.836304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.378 [2024-10-13 14:35:31.836312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.378 [2024-10-13 14:35:31.836318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.378 [2024-10-13 14:35:31.836334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.378 qpair failed and we were unable to recover it.
00:39:28.378 [2024-10-13 14:35:31.846368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.378 [2024-10-13 14:35:31.846427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.378 [2024-10-13 14:35:31.846443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.846453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.846460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.846478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.856403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.856477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.856494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.856501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.856508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.856523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.866209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.866257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.866273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.866280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.866286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.866301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.876376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.876424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.876444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.876451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.876457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.876472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.886337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.886390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.886404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.886411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.886418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.886432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.896396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.896457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.896472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.896479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.896486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.896500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.906187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.906238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.906252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.906259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.906265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.906279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.916319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.916369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.916383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.916390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.916396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.916414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.926312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.926371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.926384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.926391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.926398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.926412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.936261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.936327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.936342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.936349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.936355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.936370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.946295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.946340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.946354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.946361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.946367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.946381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.956404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.956457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.956471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.956478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.956484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.956497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.966356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.966405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.966422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.966429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.966435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.966449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.976289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.976388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.976401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.976408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.379 [2024-10-13 14:35:31.976414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.379 [2024-10-13 14:35:31.976428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.379 qpair failed and we were unable to recover it.
00:39:28.379 [2024-10-13 14:35:31.986338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.379 [2024-10-13 14:35:31.986411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.379 [2024-10-13 14:35:31.986424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.379 [2024-10-13 14:35:31.986431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:31.986437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:31.986450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:31.996365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:31.996411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:31.996424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:31.996431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:31.996437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:31.996451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.006369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.006461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.006474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.006481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.006487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.006504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.016416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.016464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.016477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.016483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.016489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.016503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.026380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.026421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.026435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.026442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.026448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.026462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.036340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.036433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.036446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.036453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.036459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.036473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.046373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.046425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.046437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.046444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.046450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.046464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.056408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.056485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.056498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.056505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.056511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.056524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.066280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.066338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.066351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.066357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.066363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.066376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.380 [2024-10-13 14:35:32.076365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.380 [2024-10-13 14:35:32.076408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.380 [2024-10-13 14:35:32.076421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.380 [2024-10-13 14:35:32.076428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.380 [2024-10-13 14:35:32.076434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.380 [2024-10-13 14:35:32.076447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.380 qpair failed and we were unable to recover it.
00:39:28.642 [2024-10-13 14:35:32.086383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.642 [2024-10-13 14:35:32.086430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.642 [2024-10-13 14:35:32.086443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.642 [2024-10-13 14:35:32.086450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.642 [2024-10-13 14:35:32.086456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.642 [2024-10-13 14:35:32.086469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.642 qpair failed and we were unable to recover it.
00:39:28.642 [2024-10-13 14:35:32.096446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.642 [2024-10-13 14:35:32.096496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.642 [2024-10-13 14:35:32.096509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.642 [2024-10-13 14:35:32.096516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.642 [2024-10-13 14:35:32.096527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.642 [2024-10-13 14:35:32.096542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.642 qpair failed and we were unable to recover it.
00:39:28.642 [2024-10-13 14:35:32.106384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.642 [2024-10-13 14:35:32.106426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.642 [2024-10-13 14:35:32.106439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.642 [2024-10-13 14:35:32.106446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.642 [2024-10-13 14:35:32.106452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.642 [2024-10-13 14:35:32.106465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.642 qpair failed and we were unable to recover it.
00:39:28.642 [2024-10-13 14:35:32.116401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.642 [2024-10-13 14:35:32.116443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.116456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.116463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.116469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.116482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.126391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.126436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.126449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.126456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.126462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.126475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.136449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.136502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.136515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.136521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.136527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.136541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.146405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.146445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.146458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.146465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.146471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.146484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.156424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.156505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.156518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.156525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.156531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.156544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.166412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.166455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.166468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.166475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.166481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.166494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.176470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.176520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.176533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.176540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.176546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.176559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.186379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.186423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.186435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.186449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.186455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.186469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.196382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.196425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.196438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.196444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.196451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.196464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.206405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.206451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.206464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.206471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.206477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.206491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.216354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.216408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.216421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.216428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.216434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.216447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.226434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.226478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.226491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.226498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.226504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.226517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.236428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.236471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.236484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.236490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.236496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.236509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.246446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:28.643 [2024-10-13 14:35:32.246496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:28.643 [2024-10-13 14:35:32.246509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:28.643 [2024-10-13 14:35:32.246515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:28.643 [2024-10-13 14:35:32.246522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:28.643 [2024-10-13 14:35:32.246535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:28.643 qpair failed and we were unable to recover it.
00:39:28.643 [2024-10-13 14:35:32.256483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.643 [2024-10-13 14:35:32.256532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.643 [2024-10-13 14:35:32.256544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.643 [2024-10-13 14:35:32.256551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.256557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.256571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.644 [2024-10-13 14:35:32.266413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.266457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.266470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.266476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.266482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.266496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.644 [2024-10-13 14:35:32.276309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.276352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.276365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.276375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.276381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.276394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 
00:39:28.644 [2024-10-13 14:35:32.286323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.286371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.286384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.286391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.286397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.286410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.644 [2024-10-13 14:35:32.296513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.296566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.296579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.296586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.296592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.296605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.644 [2024-10-13 14:35:32.306469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.306513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.306525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.306532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.306538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.306551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 
00:39:28.644 [2024-10-13 14:35:32.316429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.316475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.316488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.316494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.316501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.316514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.644 [2024-10-13 14:35:32.326474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.326523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.326535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.326542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.326548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.326562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.644 [2024-10-13 14:35:32.336385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.336463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.336476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.336483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.336489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.336503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 
00:39:28.644 [2024-10-13 14:35:32.346383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.644 [2024-10-13 14:35:32.346422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.644 [2024-10-13 14:35:32.346435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.644 [2024-10-13 14:35:32.346441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.644 [2024-10-13 14:35:32.346448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.644 [2024-10-13 14:35:32.346461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.644 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.356488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.356529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.356542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.356549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.356555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.356569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.366492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.366537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.366554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.366561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.366567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.366580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 
00:39:28.907 [2024-10-13 14:35:32.376503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.376565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.376578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.376586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.376592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.376605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.386484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.386527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.386540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.386547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.386553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.386566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.396361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.396406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.396419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.396425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.396432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.396445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 
00:39:28.907 [2024-10-13 14:35:32.406500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.406549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.406562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.406569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.406575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.406592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.416545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.416602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.416616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.416623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.416629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.416645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.426436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.426479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.426492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.426499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.426505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.426519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 
00:39:28.907 [2024-10-13 14:35:32.436521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.436569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.436582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.436589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.436595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.436608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.446405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.446453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.446466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.446473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.446479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.446495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 00:39:28.907 [2024-10-13 14:35:32.456575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.456631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.456649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.456656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.456663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.907 [2024-10-13 14:35:32.456681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.907 qpair failed and we were unable to recover it. 
00:39:28.907 [2024-10-13 14:35:32.466529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.907 [2024-10-13 14:35:32.466622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.907 [2024-10-13 14:35:32.466635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.907 [2024-10-13 14:35:32.466642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.907 [2024-10-13 14:35:32.466649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.466662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.476521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.476570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.476583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.476590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.476596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.476609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.486407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.486452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.486465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.486471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.486477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.486490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 
00:39:28.908 [2024-10-13 14:35:32.496554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.496608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.496621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.496627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.496634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.496651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.506543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.506585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.506597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.506604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.506610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.506623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.516551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.516600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.516613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.516620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.516626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.516640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 
00:39:28.908 [2024-10-13 14:35:32.526561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.526604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.526617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.526623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.526630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.526643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.536612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.536666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.536679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.536686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.536692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.536705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.546576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.546620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.546636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.546643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.546649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.546662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 
00:39:28.908 [2024-10-13 14:35:32.556574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.556615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.556628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.556635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.556641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.556654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.566570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.566620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.566633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.566639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.566646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.566658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.576632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.576689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.576703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.576710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.576716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.576735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 
00:39:28.908 [2024-10-13 14:35:32.586586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.586644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.586657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.586664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.586673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.586687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.596551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.596598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.596610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.596617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.596623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.596637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 00:39:28.908 [2024-10-13 14:35:32.606582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:28.908 [2024-10-13 14:35:32.606627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:28.908 [2024-10-13 14:35:32.606640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:28.908 [2024-10-13 14:35:32.606647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:28.908 [2024-10-13 14:35:32.606653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:28.908 [2024-10-13 14:35:32.606666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:28.908 qpair failed and we were unable to recover it. 
00:39:29.171 [2024-10-13 14:35:32.616586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.616640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.616653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.616659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.616666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.616679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 00:39:29.171 [2024-10-13 14:35:32.626596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.626640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.626653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.626660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.626666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.626679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 00:39:29.171 [2024-10-13 14:35:32.636604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.636651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.636665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.636671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.636677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.636691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 
00:39:29.171 [2024-10-13 14:35:32.646615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.646667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.646680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.646686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.646692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.646706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 00:39:29.171 [2024-10-13 14:35:32.656675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.656723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.656736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.656743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.656749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.656763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 00:39:29.171 [2024-10-13 14:35:32.666526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.666566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.666579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.666585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.666591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.666605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 
00:39:29.171 [2024-10-13 14:35:32.676595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.676639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.676652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.676658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.676668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.676681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 00:39:29.171 [2024-10-13 14:35:32.686500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.171 [2024-10-13 14:35:32.686547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.171 [2024-10-13 14:35:32.686560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.171 [2024-10-13 14:35:32.686567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.171 [2024-10-13 14:35:32.686573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.171 [2024-10-13 14:35:32.686587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.171 qpair failed and we were unable to recover it. 00:39:29.172 [2024-10-13 14:35:32.696687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.696735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.696747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.696754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.696760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.696773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 
00:39:29.172 [2024-10-13 14:35:32.706619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.706684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.706697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.706703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.706709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.706723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 00:39:29.172 [2024-10-13 14:35:32.716644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.716691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.716703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.716710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.716716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.716730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 00:39:29.172 [2024-10-13 14:35:32.726564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.726609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.726621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.726628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.726634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.726648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 
00:39:29.172 [2024-10-13 14:35:32.736560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.736615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.736629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.736636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.736642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.736656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 00:39:29.172 [2024-10-13 14:35:32.746626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.746668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.746682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.746689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.746695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.746708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 00:39:29.172 [2024-10-13 14:35:32.756635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:29.172 [2024-10-13 14:35:32.756681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:29.172 [2024-10-13 14:35:32.756705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:29.172 [2024-10-13 14:35:32.756714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:29.172 [2024-10-13 14:35:32.756721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:29.172 [2024-10-13 14:35:32.756739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:29.172 qpair failed and we were unable to recover it. 
00:39:29.172 [2024-10-13 14:35:32.766639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:29.172 [2024-10-13 14:35:32.766691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:29.172 [2024-10-13 14:35:32.766715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:29.172 [2024-10-13 14:35:32.766728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:29.172 [2024-10-13 14:35:32.766735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90
00:39:29.172 [2024-10-13 14:35:32.766754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:39:29.172 qpair failed and we were unable to recover it.
[log condensed: the identical seven-line failure record above repeats 68 more times at roughly 10 ms intervals, from 14:35:32.776 through 14:35:33.446 (pipeline clock 00:39:29.172 to 00:39:30.050), always with the same error chain against tqpair=0x7f5538000b90 on qpair id 2]
00:39:30.050 [2024-10-13 14:35:33.456910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.050 [2024-10-13 14:35:33.456957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.050 [2024-10-13 14:35:33.456970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.050 [2024-10-13 14:35:33.456977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.050 [2024-10-13 14:35:33.456983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.050 [2024-10-13 14:35:33.456997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.050 qpair failed and we were unable to recover it. 00:39:30.050 [2024-10-13 14:35:33.466778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.050 [2024-10-13 14:35:33.466823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.050 [2024-10-13 14:35:33.466838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.050 [2024-10-13 14:35:33.466845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.050 [2024-10-13 14:35:33.466851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.050 [2024-10-13 14:35:33.466865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.050 qpair failed and we were unable to recover it. 00:39:30.050 [2024-10-13 14:35:33.476909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.050 [2024-10-13 14:35:33.476954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.050 [2024-10-13 14:35:33.476967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.050 [2024-10-13 14:35:33.476974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.050 [2024-10-13 14:35:33.476981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.050 [2024-10-13 14:35:33.476998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.050 qpair failed and we were unable to recover it. 
00:39:30.050 [2024-10-13 14:35:33.486911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.050 [2024-10-13 14:35:33.486959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.050 [2024-10-13 14:35:33.486972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.050 [2024-10-13 14:35:33.486979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.050 [2024-10-13 14:35:33.486985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.050 [2024-10-13 14:35:33.486999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.050 qpair failed and we were unable to recover it. 00:39:30.050 [2024-10-13 14:35:33.496902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.050 [2024-10-13 14:35:33.496948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.050 [2024-10-13 14:35:33.496961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.050 [2024-10-13 14:35:33.496968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.050 [2024-10-13 14:35:33.496974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.050 [2024-10-13 14:35:33.496988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.050 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.506936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.507026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.507039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.507046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.507052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.507069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 
00:39:30.051 [2024-10-13 14:35:33.516915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.517000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.517013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.517020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.517026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.517040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.526926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.526972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.526988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.526995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.527001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.527014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.536916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.536960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.536973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.536980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.536987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.537000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 
00:39:30.051 [2024-10-13 14:35:33.546902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.546946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.546959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.546965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.546972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.546985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.556939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.556985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.556999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.557005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.557012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.557025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.566918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.566979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.566992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.566999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.567009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.567022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 
00:39:30.051 [2024-10-13 14:35:33.576942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.576987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.577000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.577007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.577013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.577028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.586924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.586974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.586987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.586993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.587000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.587015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.596922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.596969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.596983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.596990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.596998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.597012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 
00:39:30.051 [2024-10-13 14:35:33.606813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.606864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.606878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.606884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.606891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.606904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.616918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.616967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.616981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.616988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.616994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.617011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.626942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.626981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.626994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.627001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.627007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.627021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 
00:39:30.051 [2024-10-13 14:35:33.636873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.636913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.051 [2024-10-13 14:35:33.636926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.051 [2024-10-13 14:35:33.636933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.051 [2024-10-13 14:35:33.636939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.051 [2024-10-13 14:35:33.636952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.051 qpair failed and we were unable to recover it. 00:39:30.051 [2024-10-13 14:35:33.646957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.051 [2024-10-13 14:35:33.647006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.647019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.647026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.647032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.647045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.052 [2024-10-13 14:35:33.656971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.657071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.657084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.657091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.657103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.657118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 
00:39:30.052 [2024-10-13 14:35:33.666962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.667007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.667020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.667027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.667033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.667046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.052 [2024-10-13 14:35:33.676969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.677014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.677027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.677034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.677040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.677054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.052 [2024-10-13 14:35:33.686970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.687014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.687027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.687034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.687040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.687054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 
00:39:30.052 [2024-10-13 14:35:33.696973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.697023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.697036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.697043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.697049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.697065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.052 [2024-10-13 14:35:33.706973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.707020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.707033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.707040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.707046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.707060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.052 [2024-10-13 14:35:33.716978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.717019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.717032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.717039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.717045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.717058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 
00:39:30.052 [2024-10-13 14:35:33.726982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.727029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.727043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.727050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.727056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.727074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.052 [2024-10-13 14:35:33.736964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.052 [2024-10-13 14:35:33.737012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.052 [2024-10-13 14:35:33.737025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.052 [2024-10-13 14:35:33.737032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.052 [2024-10-13 14:35:33.737039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.052 [2024-10-13 14:35:33.737052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.052 qpair failed and we were unable to recover it. 00:39:30.315 [2024-10-13 14:35:33.747044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.747096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.747109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.747119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.747125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.747139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 
00:39:30.315 [2024-10-13 14:35:33.757001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.757047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.757061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.757073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.757079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.757093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 00:39:30.315 [2024-10-13 14:35:33.767008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.767055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.767071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.767078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.767084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.767098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 00:39:30.315 [2024-10-13 14:35:33.777024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.777076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.777089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.777096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.777102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.777116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 
00:39:30.315 [2024-10-13 14:35:33.787011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.787060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.787075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.787082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.787089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.787102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 00:39:30.315 [2024-10-13 14:35:33.796984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.797027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.797040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.797047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.797053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.797070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 00:39:30.315 [2024-10-13 14:35:33.807035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.807088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.807101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.807108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.807114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.807128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 
00:39:30.315 [2024-10-13 14:35:33.817027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.817085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.817101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.817110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.817119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.817137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 00:39:30.315 [2024-10-13 14:35:33.827013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.315 [2024-10-13 14:35:33.827057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.315 [2024-10-13 14:35:33.827074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.315 [2024-10-13 14:35:33.827081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.315 [2024-10-13 14:35:33.827087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.315 [2024-10-13 14:35:33.827101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.315 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.837023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.837067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.837080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.837090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.837096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.837110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.316 [2024-10-13 14:35:33.847034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.847083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.847095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.847102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.847108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.847122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.857040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.857090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.857103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.857110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.857116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.857129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.867050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.867094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.867108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.867114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.867121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.867134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.316 [2024-10-13 14:35:33.877035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.877078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.877091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.877098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.877104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.877118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.887037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.887087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.887100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.887106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.887113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.887126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.897054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.897120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.897133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.897140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.897146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.897160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.316 [2024-10-13 14:35:33.907098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.907169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.907182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.907189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.907195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.907209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.917050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.917096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.917109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.917116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.917122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.917135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.927038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.927099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.927115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.927122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.927128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.927142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.316 [2024-10-13 14:35:33.937034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.937085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.937098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.937105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.937111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.937124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.947073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.947127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.947140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.947147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.947153] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.947167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.957069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.957115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.957128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.957135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.957141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.957154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.316 [2024-10-13 14:35:33.967046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.967092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.967113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.967120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.967126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.967144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.977131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.977174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.977187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.977194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.977200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.977214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:33.986978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.987026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.987040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.987046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.987053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.987071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.316 [2024-10-13 14:35:33.996960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:33.997001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:33.997014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:33.997021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:33.997027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:33.997041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:34.007105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:34.007152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:34.007165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:34.007172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:34.007178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:34.007192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 00:39:30.316 [2024-10-13 14:35:34.017125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.316 [2024-10-13 14:35:34.017168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.316 [2024-10-13 14:35:34.017184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.316 [2024-10-13 14:35:34.017191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.316 [2024-10-13 14:35:34.017197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.316 [2024-10-13 14:35:34.017211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.316 qpair failed and we were unable to recover it. 
00:39:30.577 [2024-10-13 14:35:34.026977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.577 [2024-10-13 14:35:34.027023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.577 [2024-10-13 14:35:34.027036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.577 [2024-10-13 14:35:34.027042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.577 [2024-10-13 14:35:34.027049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.577 [2024-10-13 14:35:34.027065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.577 qpair failed and we were unable to recover it. 00:39:30.577 [2024-10-13 14:35:34.037108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.577 [2024-10-13 14:35:34.037151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.577 [2024-10-13 14:35:34.037164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.577 [2024-10-13 14:35:34.037171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.577 [2024-10-13 14:35:34.037177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.577 [2024-10-13 14:35:34.037190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.577 qpair failed and we were unable to recover it. 00:39:30.577 [2024-10-13 14:35:34.047096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.577 [2024-10-13 14:35:34.047139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.577 [2024-10-13 14:35:34.047152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.577 [2024-10-13 14:35:34.047158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.577 [2024-10-13 14:35:34.047164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.577 [2024-10-13 14:35:34.047178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.577 qpair failed and we were unable to recover it. 
00:39:30.577 [2024-10-13 14:35:34.057133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.577 [2024-10-13 14:35:34.057181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.577 [2024-10-13 14:35:34.057194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.577 [2024-10-13 14:35:34.057200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.577 [2024-10-13 14:35:34.057206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.577 [2024-10-13 14:35:34.057223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.577 qpair failed and we were unable to recover it. 00:39:30.577 [2024-10-13 14:35:34.067129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.067173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.067186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.067192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.067199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.067212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.077128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.077171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.077184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.077191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.077197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.077210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 
00:39:30.578 [2024-10-13 14:35:34.087157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.087229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.087242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.087249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.087255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.087268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.097021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.097121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.097134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.097142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.097149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.097163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.107142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.107228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.107243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.107250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.107256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.107270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 
00:39:30.578 [2024-10-13 14:35:34.117149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.117193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.117206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.117213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.117219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.117233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.127149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.127193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.127206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.127213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.127219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.127232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.137070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.137159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.137172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.137180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.137186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.137199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 
00:39:30.578 [2024-10-13 14:35:34.147145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.147188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.147200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.147207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.147216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.147230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.157033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.157079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.157093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.157099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.157105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.157125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.167175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.167221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.167234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.167241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.167247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.167261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 
00:39:30.578 [2024-10-13 14:35:34.177194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.177243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.177256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.177263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.177269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.177282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.187158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.187236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.187248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.187255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.187261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.187275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 00:39:30.578 [2024-10-13 14:35:34.197055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.197104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.197117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.578 [2024-10-13 14:35:34.197124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.578 [2024-10-13 14:35:34.197130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.578 [2024-10-13 14:35:34.197144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.578 qpair failed and we were unable to recover it. 
00:39:30.578 [2024-10-13 14:35:34.207168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.578 [2024-10-13 14:35:34.207215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.578 [2024-10-13 14:35:34.207236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.207244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.207251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.207269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 00:39:30.579 [2024-10-13 14:35:34.217195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.217243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.217257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.217263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.217270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.217283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 00:39:30.579 [2024-10-13 14:35:34.227159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.227202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.227215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.227221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.227228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.227241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 
00:39:30.579 [2024-10-13 14:35:34.237153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.237193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.237206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.237213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.237222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.237236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 00:39:30.579 [2024-10-13 14:35:34.247130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.247206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.247219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.247225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.247231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.247245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 00:39:30.579 [2024-10-13 14:35:34.257223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.257267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.257280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.257287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.257293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.257307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 
00:39:30.579 [2024-10-13 14:35:34.267200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.267250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.267263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.267270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.267276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.267290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 00:39:30.579 [2024-10-13 14:35:34.277102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.579 [2024-10-13 14:35:34.277142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.579 [2024-10-13 14:35:34.277155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.579 [2024-10-13 14:35:34.277161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.579 [2024-10-13 14:35:34.277167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.579 [2024-10-13 14:35:34.277181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.579 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.287220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.287290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.287303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.287310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.287316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.287329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 
00:39:30.842 [2024-10-13 14:35:34.297237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.297284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.297297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.297303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.297309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.297323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.307185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.307230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.307242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.307249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.307255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.307269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.317199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.317244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.317257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.317264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.317270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.317283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 
00:39:30.842 [2024-10-13 14:35:34.327093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.327140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.327153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.327163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.327170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.327183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.337248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.337294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.337307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.337313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.337319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.337333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.347218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.347260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.347273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.347279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.347286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.347299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 
00:39:30.842 [2024-10-13 14:35:34.357115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.357176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.357189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.357195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.357201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.357215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.367256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.367301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.367314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.367321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.367327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.367340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.377149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.377199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.377212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.377219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.377225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.377238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 
00:39:30.842 [2024-10-13 14:35:34.387232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.387278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.387291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.842 [2024-10-13 14:35:34.387298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.842 [2024-10-13 14:35:34.387304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.842 [2024-10-13 14:35:34.387317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.842 qpair failed and we were unable to recover it. 00:39:30.842 [2024-10-13 14:35:34.397283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.842 [2024-10-13 14:35:34.397327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.842 [2024-10-13 14:35:34.397340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.397347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.397353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.397366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.407270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.407359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.407372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.407378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.407385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.407398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 
00:39:30.843 [2024-10-13 14:35:34.417239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.417286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.417306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.417312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.417318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.417332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.427213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.427257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.427270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.427277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.427283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.427296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.437279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.437320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.437333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.437339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.437345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.437359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 
00:39:30.843 [2024-10-13 14:35:34.447251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.447322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.447334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.447341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.447347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.447360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.457248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.457298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.457311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.457318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.457324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.457340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.467278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.467322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.467335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.467342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.467348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.467361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 
00:39:30.843 [2024-10-13 14:35:34.477141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.477185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.477197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.477204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.477211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.477224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.487297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.487352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.487365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.487372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.487378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.487391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.497304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.497348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.497361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.497368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.497374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.497387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 
00:39:30.843 [2024-10-13 14:35:34.507279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.507332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.507348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.507355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.507361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.507374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.517247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.517292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.517305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.517312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.517318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.517331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 00:39:30.843 [2024-10-13 14:35:34.527301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.527347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.527360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.527367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.527373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.843 [2024-10-13 14:35:34.527386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.843 qpair failed and we were unable to recover it. 
00:39:30.843 [2024-10-13 14:35:34.537299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:30.843 [2024-10-13 14:35:34.537347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:30.843 [2024-10-13 14:35:34.537360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:30.843 [2024-10-13 14:35:34.537367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:30.843 [2024-10-13 14:35:34.537373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:30.844 [2024-10-13 14:35:34.537386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:30.844 qpair failed and we were unable to recover it. 00:39:31.106 [2024-10-13 14:35:34.547302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.547344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.547357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.547364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.547370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.547387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.106 qpair failed and we were unable to recover it. 00:39:31.106 [2024-10-13 14:35:34.557199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.557246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.557259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.557267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.557273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.557287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.106 qpair failed and we were unable to recover it. 
00:39:31.106 [2024-10-13 14:35:34.567211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.567268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.567281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.567289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.567295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.567310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.106 qpair failed and we were unable to recover it. 00:39:31.106 [2024-10-13 14:35:34.577353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.577400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.577413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.577420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.577426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.577439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.106 qpair failed and we were unable to recover it. 00:39:31.106 [2024-10-13 14:35:34.587299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.587343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.587356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.587362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.587369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.587383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.106 qpair failed and we were unable to recover it. 
00:39:31.106 [2024-10-13 14:35:34.597312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.597357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.597373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.597381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.597388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.597402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.106 qpair failed and we were unable to recover it. 00:39:31.106 [2024-10-13 14:35:34.607339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.106 [2024-10-13 14:35:34.607382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.106 [2024-10-13 14:35:34.607394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.106 [2024-10-13 14:35:34.607401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.106 [2024-10-13 14:35:34.607407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.106 [2024-10-13 14:35:34.607421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.617401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.617447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.617459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.617466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.617472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.617485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 
00:39:31.107 [2024-10-13 14:35:34.627233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.627297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.627310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.627317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.627323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.627336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.637351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.637434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.637449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.637458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.637470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.637486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.647208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.647252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.647265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.647272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.647278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.647292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 
00:39:31.107 [2024-10-13 14:35:34.657342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.657389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.657402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.657409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.657415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.657429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.667325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.667411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.667425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.667431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.667437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.667454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.677393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.677469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.677483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.677490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.677496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.677513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 
00:39:31.107 [2024-10-13 14:35:34.687332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.687381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.687394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.687401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.687407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.687421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.697339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.697387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.697401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.697407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.697413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.697427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.107 [2024-10-13 14:35:34.707369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.707412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.707425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.707432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.707438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.707451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 
00:39:31.107 [2024-10-13 14:35:34.717362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.107 [2024-10-13 14:35:34.717407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.107 [2024-10-13 14:35:34.717420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.107 [2024-10-13 14:35:34.717427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.107 [2024-10-13 14:35:34.717434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.107 [2024-10-13 14:35:34.717447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.107 qpair failed and we were unable to recover it. 00:39:31.108 [2024-10-13 14:35:34.727385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.727462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.727475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.727482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.727491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.727505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 00:39:31.108 [2024-10-13 14:35:34.737407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.737453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.737466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.737473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.737479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.737492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 
00:39:31.108 [2024-10-13 14:35:34.747375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.747419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.747432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.747438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.747445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.747458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 00:39:31.108 [2024-10-13 14:35:34.757360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.757414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.757426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.757433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.757440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.757453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 00:39:31.108 [2024-10-13 14:35:34.767284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.767361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.767374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.767381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.767387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.767400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 
00:39:31.108 [2024-10-13 14:35:34.777376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.777424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.777438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.777445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.777451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.777464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 00:39:31.108 [2024-10-13 14:35:34.787374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.787414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.787427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.787434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.787440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.787453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 00:39:31.108 [2024-10-13 14:35:34.797396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.797441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.797454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.797460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.797466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.797480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 
00:39:31.108 [2024-10-13 14:35:34.807379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.108 [2024-10-13 14:35:34.807427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.108 [2024-10-13 14:35:34.807439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.108 [2024-10-13 14:35:34.807446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.108 [2024-10-13 14:35:34.807452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.108 [2024-10-13 14:35:34.807465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.108 qpair failed and we were unable to recover it. 00:39:31.371 [2024-10-13 14:35:34.817414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.371 [2024-10-13 14:35:34.817513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.371 [2024-10-13 14:35:34.817526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.371 [2024-10-13 14:35:34.817536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.371 [2024-10-13 14:35:34.817543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.371 [2024-10-13 14:35:34.817556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.371 qpair failed and we were unable to recover it. 00:39:31.371 [2024-10-13 14:35:34.827399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.371 [2024-10-13 14:35:34.827442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.371 [2024-10-13 14:35:34.827454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.371 [2024-10-13 14:35:34.827461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.371 [2024-10-13 14:35:34.827467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.371 [2024-10-13 14:35:34.827481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.371 qpair failed and we were unable to recover it. 
00:39:31.371 [2024-10-13 14:35:34.837412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.371 [2024-10-13 14:35:34.837455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.371 [2024-10-13 14:35:34.837468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.371 [2024-10-13 14:35:34.837475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.371 [2024-10-13 14:35:34.837481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.371 [2024-10-13 14:35:34.837494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.371 qpair failed and we were unable to recover it. 00:39:31.371 [2024-10-13 14:35:34.847396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.371 [2024-10-13 14:35:34.847443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.371 [2024-10-13 14:35:34.847457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.371 [2024-10-13 14:35:34.847464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.371 [2024-10-13 14:35:34.847470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.371 [2024-10-13 14:35:34.847483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.371 qpair failed and we were unable to recover it. 00:39:31.371 [2024-10-13 14:35:34.857393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.371 [2024-10-13 14:35:34.857446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.371 [2024-10-13 14:35:34.857460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.371 [2024-10-13 14:35:34.857466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.371 [2024-10-13 14:35:34.857473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.371 [2024-10-13 14:35:34.857486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.371 qpair failed and we were unable to recover it. 
00:39:31.371 [2024-10-13 14:35:34.867416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.371 [2024-10-13 14:35:34.867456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.371 [2024-10-13 14:35:34.867469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.371 [2024-10-13 14:35:34.867476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.867482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.867496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.877415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.877459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.877472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.877479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.877485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.877498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.887279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.887325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.887338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.887345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.887351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.887364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 
00:39:31.372 [2024-10-13 14:35:34.897429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.897472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.897485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.897492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.897499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.897512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.907299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.907355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.907368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.907379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.907385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.907399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.917408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.917463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.917476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.917483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.917489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.917503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 
00:39:31.372 [2024-10-13 14:35:34.927421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.927461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.927470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.927475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.927479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.927488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.937399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.937441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.937451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.937456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.937460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.937469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.947407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.947443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.947453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.947457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.947461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.947471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 
00:39:31.372 [2024-10-13 14:35:34.957427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.957464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.957473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.957478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.957482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.957492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.967396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.967438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.967447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.967452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.967456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.967466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.977421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.977465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.977475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.977480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.977484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.977494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 
00:39:31.372 [2024-10-13 14:35:34.987304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.987345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.987354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.987359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.987363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.987372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:34.997448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:34.997490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:34.997502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:34.997507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.372 [2024-10-13 14:35:34.997511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.372 [2024-10-13 14:35:34.997521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.372 qpair failed and we were unable to recover it. 00:39:31.372 [2024-10-13 14:35:35.007317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.372 [2024-10-13 14:35:35.007359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.372 [2024-10-13 14:35:35.007369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.372 [2024-10-13 14:35:35.007374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.007378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.007388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 
00:39:31.373 [2024-10-13 14:35:35.017460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.373 [2024-10-13 14:35:35.017527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.373 [2024-10-13 14:35:35.017537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.373 [2024-10-13 14:35:35.017541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.017546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.017555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 00:39:31.373 [2024-10-13 14:35:35.027427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.373 [2024-10-13 14:35:35.027467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.373 [2024-10-13 14:35:35.027476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.373 [2024-10-13 14:35:35.027481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.027485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.027494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 00:39:31.373 [2024-10-13 14:35:35.037467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.373 [2024-10-13 14:35:35.037504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.373 [2024-10-13 14:35:35.037514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.373 [2024-10-13 14:35:35.037518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.037523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.037535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 
00:39:31.373 [2024-10-13 14:35:35.047464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.373 [2024-10-13 14:35:35.047506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.373 [2024-10-13 14:35:35.047516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.373 [2024-10-13 14:35:35.047521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.047525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.047534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 00:39:31.373 [2024-10-13 14:35:35.057468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.373 [2024-10-13 14:35:35.057510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.373 [2024-10-13 14:35:35.057519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.373 [2024-10-13 14:35:35.057524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.057529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.057538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 00:39:31.373 [2024-10-13 14:35:35.067473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.373 [2024-10-13 14:35:35.067514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.373 [2024-10-13 14:35:35.067523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.373 [2024-10-13 14:35:35.067528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.373 [2024-10-13 14:35:35.067532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.373 [2024-10-13 14:35:35.067541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.373 qpair failed and we were unable to recover it. 
00:39:31.635 [2024-10-13 14:35:35.077392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.077434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.077445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.077450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.077454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.077464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.087481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.087520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.087534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.087539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.087543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.087553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.097500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.097550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.097560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.097566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.097571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.097581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 
00:39:31.635 [2024-10-13 14:35:35.107481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.107520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.107529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.107533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.107538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.107547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.117491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.117530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.117540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.117545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.117549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.117559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.127482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.127525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.127534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.127539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.127548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.127558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 
00:39:31.635 [2024-10-13 14:35:35.137508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.137552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.137562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.137567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.137571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.137580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.147461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.147500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.147510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.147514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.147519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.147528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.157497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.157536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.157546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.157551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.157555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.157565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 
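Note: the failed attempts land on a steady ~10 ms cadence (…047, …057, …067, and so on), i.e. the host keeps re-dialing at its polling rate rather than backing off while the target still has no controller ID 0x1. That cadence can be read straight out of the log; another generic sketch over the saved console output (build.log is again a placeholder, and the arithmetic assumes consecutive attempts fall within the same minute):

  # Pull the seconds+microseconds of every CONNECT failure and print the
  # gap between consecutive attempts.
  grep 'nvme_fabric.c: 599' build.log \
    | sed -E 's/.*:([0-9]{2})\.([0-9]{6})\].*/\1\2/' \
    | awk '{ if (NR > 1) printf "%d us\n", $1 - prev; prev = $1 }'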
00:39:31.635 [2024-10-13 14:35:35.167517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.167590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.167599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.167604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.167608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5538000b90 00:39:31.635 [2024-10-13 14:35:35.167618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.177515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.177607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.177671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.177695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.177716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23b4310 00:39:31.635 [2024-10-13 14:35:35.177768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.187408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.187482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.187513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.187529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.187543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23b4310 00:39:31.635 [2024-10-13 14:35:35.187573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:39:31.635 qpair failed and we were unable to recover it. 
00:39:31.635 [2024-10-13 14:35:35.197538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.197631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.197695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.635 [2024-10-13 14:35:35.197719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.635 [2024-10-13 14:35:35.197740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5540000b90 00:39:31.635 [2024-10-13 14:35:35.197793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:31.635 qpair failed and we were unable to recover it. 00:39:31.635 [2024-10-13 14:35:35.207503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.635 [2024-10-13 14:35:35.207588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.635 [2024-10-13 14:35:35.207634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.636 [2024-10-13 14:35:35.207652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.636 [2024-10-13 14:35:35.207667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5540000b90 00:39:31.636 [2024-10-13 14:35:35.207707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:39:31.636 qpair failed and we were unable to recover it. 00:39:31.636 [2024-10-13 14:35:35.217517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:39:31.636 [2024-10-13 14:35:35.217605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:39:31.636 [2024-10-13 14:35:35.217670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:39:31.636 [2024-10-13 14:35:35.217695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:31.636 [2024-10-13 14:35:35.217726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5534000b90 00:39:31.636 [2024-10-13 14:35:35.217778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:39:31.636 qpair failed and we were unable to recover it. 
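Note: by this point the rejected CONNECTs span qpair ids 1 through 4 and several distinct tqpair addresses (0x7f5538000b90, 0x23b4310, 0x7f5540000b90, 0x7f5534000b90), so every queue the host owns is affected, consistent with the target having dropped controller ID 0x1 entirely rather than one queue misbehaving. If a run like this needs live debugging, the target's view can be queried over SPDK's JSON-RPC socket; a minimal sketch, assuming the target app is still up and listening on the default /var/tmp/spdk.sock:

  # Subsystems, listeners and attached hosts as the nvmf target sees them.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
  # Poll-group statistics, including admin/io qpair counts per group.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats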
00:39:31.636 [2024-10-13 14:35:35.227410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:39:31.636 [2024-10-13 14:35:35.227517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:39:31.636 [2024-10-13 14:35:35.227544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:39:31.636 [2024-10-13 14:35:35.227558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:31.636 [2024-10-13 14:35:35.227571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5534000b90
00:39:31.636 [2024-10-13 14:35:35.227599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:39:31.636 qpair failed and we were unable to recover it.
00:39:31.636 [2024-10-13 14:35:35.227806] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:39:31.636 A controller has encountered a failure and is being reset.
00:39:31.636 [2024-10-13 14:35:35.227922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c22b0 (9): Bad file descriptor
00:39:31.897 Controller properly reset.
00:39:31.897 [2024-10-13 14:35:35.363759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18b30 is same with the state(6) to be set
00:39:31.897 Initializing NVMe Controllers
00:39:31.897 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:31.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:31.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:39:31.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:39:31.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:39:31.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:39:31.897 Initialization complete. Launching workers.
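Note: in the status pair reported above, sct 1 is the command-specific status type and sc 130 (0x82) is the fabrics CONNECT "invalid parameters" code, which matches the target-side "Unknown controller ID 0x1" complaint. Once the keep-alive also fails, the host tears the controller down ("A controller has encountered a failure and is being reset.") and the subsequent re-initialization succeeds. The same fabric CONNECT can be exercised by hand with the stock kernel initiator to check whether the target accepts connections again; a sketch using nvme-cli with the address, port and subsystem NQN from this log, run from any host with reachability to 10.0.0.2:

  # Query the discovery service, then connect to the same I/O subsystem.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                              # confirm the controller is live
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1 # clean up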
00:39:31.897 Starting thread on core 1 00:39:31.897 Starting thread on core 2 00:39:31.897 Starting thread on core 3 00:39:31.897 Starting thread on core 0 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:39:31.897 00:39:31.897 real 0m11.612s 00:39:31.897 user 0m21.421s 00:39:31.897 sys 0m3.950s 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:39:31.897 ************************************ 00:39:31.897 END TEST nvmf_target_disconnect_tc2 00:39:31.897 ************************************ 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.897 rmmod nvme_tcp 00:39:31.897 rmmod nvme_fabrics 00:39:31.897 rmmod nvme_keyring 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1991014 ']' 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1991014 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1991014 ']' 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1991014 00:39:31.897 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1991014 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1991014' 00:39:31.898 killing process with pid 1991014 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1991014 00:39:31.898 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1991014 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.159 14:35:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.076 14:35:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:34.076 00:39:34.076 real 0m22.201s 00:39:34.076 user 0m49.577s 00:39:34.076 sys 0m10.235s 00:39:34.076 14:35:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:34.076 14:35:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:34.076 ************************************ 00:39:34.076 END TEST nvmf_target_disconnect 00:39:34.076 ************************************ 00:39:34.336 14:35:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:34.336 00:39:34.336 real 8m3.184s 00:39:34.336 user 17m38.386s 00:39:34.336 sys 2m26.598s 00:39:34.336 14:35:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:34.336 14:35:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.336 ************************************ 00:39:34.336 END TEST nvmf_host 00:39:34.336 ************************************ 00:39:34.336 14:35:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:39:34.336 14:35:37 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:39:34.336 14:35:37 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:34.336 14:35:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:34.336 14:35:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:34.336 14:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:34.336 ************************************ 00:39:34.336 START TEST nvmf_target_core_interrupt_mode 00:39:34.336 ************************************ 00:39:34.336 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:39:34.336 * Looking for test storage... 00:39:34.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:39:34.336 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:34.336 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:39:34.336 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.598 --rc genhtml_branch_coverage=1 00:39:34.598 --rc genhtml_function_coverage=1 00:39:34.598 --rc genhtml_legend=1 00:39:34.598 --rc geninfo_all_blocks=1 00:39:34.598 --rc geninfo_unexecuted_blocks=1 00:39:34.598 00:39:34.598 ' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.598 --rc genhtml_branch_coverage=1 00:39:34.598 --rc genhtml_function_coverage=1 00:39:34.598 --rc genhtml_legend=1 00:39:34.598 --rc geninfo_all_blocks=1 00:39:34.598 --rc geninfo_unexecuted_blocks=1 00:39:34.598 00:39:34.598 ' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.598 --rc genhtml_branch_coverage=1 00:39:34.598 --rc genhtml_function_coverage=1 00:39:34.598 --rc genhtml_legend=1 00:39:34.598 --rc geninfo_all_blocks=1 00:39:34.598 --rc geninfo_unexecuted_blocks=1 00:39:34.598 00:39:34.598 ' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.598 --rc genhtml_branch_coverage=1 00:39:34.598 --rc genhtml_function_coverage=1 00:39:34.598 --rc genhtml_legend=1 00:39:34.598 --rc geninfo_all_blocks=1 00:39:34.598 --rc geninfo_unexecuted_blocks=1 00:39:34.598 00:39:34.598 ' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.598 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:34.599 ************************************ 00:39:34.599 START TEST nvmf_abort 00:39:34.599 ************************************ 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:39:34.599 * Looking for test storage... 00:39:34.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:39:34.599 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.860 --rc genhtml_branch_coverage=1 00:39:34.860 --rc genhtml_function_coverage=1 00:39:34.860 --rc genhtml_legend=1 00:39:34.860 --rc geninfo_all_blocks=1 00:39:34.860 --rc geninfo_unexecuted_blocks=1 00:39:34.860 00:39:34.860 ' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.860 --rc genhtml_branch_coverage=1 00:39:34.860 --rc genhtml_function_coverage=1 00:39:34.860 --rc genhtml_legend=1 00:39:34.860 --rc geninfo_all_blocks=1 00:39:34.860 --rc geninfo_unexecuted_blocks=1 00:39:34.860 00:39:34.860 ' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.860 --rc genhtml_branch_coverage=1 00:39:34.860 --rc genhtml_function_coverage=1 00:39:34.860 --rc genhtml_legend=1 00:39:34.860 --rc geninfo_all_blocks=1 00:39:34.860 --rc geninfo_unexecuted_blocks=1 00:39:34.860 00:39:34.860 ' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:34.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:34.860 --rc genhtml_branch_coverage=1 00:39:34.860 --rc genhtml_function_coverage=1 00:39:34.860 --rc genhtml_legend=1 00:39:34.860 --rc geninfo_all_blocks=1 00:39:34.860 --rc geninfo_unexecuted_blocks=1 00:39:34.860 00:39:34.860 ' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:34.860 14:35:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:34.860 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:39:34.861 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:42.999 14:35:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:42.999 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:43.000 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
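Note: the trace above is the harness building allow-lists of supported NIC device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs), intersecting them with the machine's PCI bus, and recording matches such as 0000:31:00.0 (0x8086 - 0x159b). The same lookup can be reproduced directly from sysfs without SPDK; a minimal standalone sketch for the E810 parts found here:

  # List PCI functions with vendor 0x8086 / device 0x159b (Intel E810)
  # and the kernel net interface(s) bound to each one.
  for dev in /sys/bus/pci/devices/*; do
    [ "$(cat "$dev/vendor")" = 0x8086 ] || continue
    [ "$(cat "$dev/device")" = 0x159b ] || continue
    echo "Found ${dev##*/}: $(ls "$dev/net" 2>/dev/null)"
  done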
00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:43.000 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:43.000 Found net devices under 0000:31:00.0: cvl_0_0 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:43.000 Found net devices under 0000:31:00.1: cvl_0_1 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:43.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:43.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:39:43.000 00:39:43.000 --- 10.0.0.2 ping statistics --- 00:39:43.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.000 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:43.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:43.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:39:43.000 00:39:43.000 --- 10.0.0.1 ping statistics --- 00:39:43.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.000 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1996605 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1996605 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1996605 ']' 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:43.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:43.000 14:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.000 [2024-10-13 14:35:45.874543] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:43.000 [2024-10-13 14:35:45.875503] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:39:43.001 [2024-10-13 14:35:45.875538] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:43.001 [2024-10-13 14:35:46.012347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:43.001 [2024-10-13 14:35:46.061298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:43.001 [2024-10-13 14:35:46.079066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:43.001 [2024-10-13 14:35:46.079095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:43.001 [2024-10-13 14:35:46.079104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:43.001 [2024-10-13 14:35:46.079111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:43.001 [2024-10-13 14:35:46.079116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:43.001 [2024-10-13 14:35:46.080604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:43.001 [2024-10-13 14:35:46.080755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.001 [2024-10-13 14:35:46.080758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:43.001 [2024-10-13 14:35:46.129057] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:43.001 [2024-10-13 14:35:46.130028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:43.001 [2024-10-13 14:35:46.131072] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:43.001 [2024-10-13 14:35:46.131173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:43.001 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:43.001 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:39:43.001 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:43.001 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:43.001 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 [2024-10-13 14:35:46.729618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 Malloc0 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 Delay0 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 [2024-10-13 14:35:46.833541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:43.262 14:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:39:43.523 [2024-10-13 14:35:47.023017] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:45.435 Initializing NVMe Controllers 00:39:45.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:45.435 controller IO queue size 128 less than required 00:39:45.435 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:45.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:45.435 Initialization complete. Launching workers. 
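The trace above drives SPDK's bundled abort example against the listener it just created. As a rough standalone reproduction, with a hypothetical $SPDK_DIR standing in for the Jenkins workspace checkout used in the run, the invocation is:

    # SPDK_DIR: root of a built SPDK tree (hypothetical shorthand).
    # Flags copied from the run above; judging from the "queue size 128"
    # warning in the output, -q is the requested queue depth. -t 1 limits
    # the run to one second, -c 0x1 pins it to core 0, -l warning sets the
    # log level, and -r is the transport ID of the target to attack.
    "$SPDK_DIR"/build/examples/abort -q 128 -t 1 -c 0x1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The tool floods the target with I/O and then aborts it; in this run 28730 aborts were submitted, 28673 completed successfully and 57 were unsuccessful, with none failing outright.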
00:39:45.435 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28673 00:39:45.435 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28730, failed to submit 66 00:39:45.435 success 28673, unsuccessful 57, failed 0 00:39:45.435 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:45.435 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.435 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:45.435 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:45.436 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:45.436 rmmod nvme_tcp 00:39:45.436 rmmod nvme_fabrics 00:39:45.698 rmmod nvme_keyring 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1996605 ']' 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1996605 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1996605 ']' 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1996605 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996605 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996605' 00:39:45.698 killing process with pid 1996605 
00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1996605 00:39:45.698 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1996605 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:45.959 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:47.871 00:39:47.871 real 0m13.358s 00:39:47.871 user 0m10.805s 00:39:47.871 sys 0m7.023s 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:47.871 ************************************ 00:39:47.871 END TEST nvmf_abort 00:39:47.871 ************************************ 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:47.871 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:48.132 ************************************ 00:39:48.132 START TEST nvmf_ns_hotplug_stress 00:39:48.132 ************************************ 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:48.132 * Looking for test storage... 
00:39:48.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.132 --rc genhtml_branch_coverage=1 00:39:48.132 --rc genhtml_function_coverage=1 00:39:48.132 --rc genhtml_legend=1 00:39:48.132 --rc geninfo_all_blocks=1 00:39:48.132 --rc geninfo_unexecuted_blocks=1 00:39:48.132 00:39:48.132 ' 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.132 --rc genhtml_branch_coverage=1 00:39:48.132 --rc genhtml_function_coverage=1 00:39:48.132 --rc genhtml_legend=1 00:39:48.132 --rc geninfo_all_blocks=1 00:39:48.132 --rc geninfo_unexecuted_blocks=1 00:39:48.132 00:39:48.132 ' 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.132 --rc genhtml_branch_coverage=1 00:39:48.132 --rc genhtml_function_coverage=1 00:39:48.132 --rc genhtml_legend=1 00:39:48.132 --rc geninfo_all_blocks=1 00:39:48.132 --rc geninfo_unexecuted_blocks=1 00:39:48.132 00:39:48.132 ' 00:39:48.132 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:48.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.132 --rc genhtml_branch_coverage=1 00:39:48.132 --rc genhtml_function_coverage=1 
00:39:48.132 --rc genhtml_legend=1 00:39:48.133 --rc geninfo_all_blocks=1 00:39:48.133 --rc geninfo_unexecuted_blocks=1 00:39:48.133 00:39:48.133 ' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.133 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:48.395 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:48.395 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:48.395 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:39:48.395 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.528 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.528 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.528 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.529 14:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.529 14:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:56.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:56.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:56.529 
14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:56.529 Found net devices under 0000:31:00.0: cvl_0_0 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:56.529 Found net devices under 0000:31:00.1: cvl_0_1 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.529 14:35:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.529 14:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.529 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:39:56.529 00:39:56.529 --- 10.0.0.2 ping statistics --- 00:39:56.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.530 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:39:56.530 00:39:56.530 --- 10.0.0.1 ping statistics --- 00:39:56.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.530 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2001574 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2001574 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2001574 ']' 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:56.530 14:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.530 [2024-10-13 14:35:59.364461] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.530 [2024-10-13 14:35:59.365978] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:39:56.530 [2024-10-13 14:35:59.366040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.530 [2024-10-13 14:35:59.507840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:56.530 [2024-10-13 14:35:59.556510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:56.530 [2024-10-13 14:35:59.583449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.530 [2024-10-13 14:35:59.583493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.530 [2024-10-13 14:35:59.583501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.530 [2024-10-13 14:35:59.583508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.530 [2024-10-13 14:35:59.583514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:56.530 [2024-10-13 14:35:59.585311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:56.530 [2024-10-13 14:35:59.585468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.530 [2024-10-13 14:35:59.585469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:56.530 [2024-10-13 14:35:59.648582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:56.530 [2024-10-13 14:35:59.649474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:56.530 [2024-10-13 14:35:59.650087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:56.530 [2024-10-13 14:35:59.650225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
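nvmfappstart then launches the target inside that namespace with core mask 0xE and --interrupt-mode; the reactor and spdk_thread NOTICE lines above confirm cores 1-3 came up and every poll group runs in interrupt mode. A hedged sketch of the launch-and-wait step (the polling loop is illustrative rather than a copy of waitforlisten, and rpc_get_methods is used only as a cheap probe of the RPC socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Wait for the app to create and answer on /var/tmp/spdk.sock before issuing RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done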
00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:56.530 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:56.790 [2024-10-13 14:36:00.358350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:56.790 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:57.050 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.050 [2024-10-13 14:36:00.695043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.050 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:57.311 14:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:57.570 Malloc0 00:39:57.570 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:57.570 Delay0 00:39:57.570 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:57.831 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:58.092 NULL1 00:39:58.092 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
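Condensed, the rpc.py calls traced above assemble the whole target in a short RPC sequence: a TCP transport, one subsystem that allows any host and caps namespaces at 10, data and discovery listeners on 10.0.0.2:4420, and three bdevs, of which Delay0 (a delay bdev layered on Malloc0) and NULL1 become namespaces 1 and 2. The same sequence, with a shell function standing in for the full rpc.py path:

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 512 -b Malloc0       # 32 MiB backing bdev, 512 B blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512 B blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # NSID 2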
00:39:58.092 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2002045 00:39:58.092 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:39:58.092 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:58.092 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:58.353 14:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:58.614 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:58.614 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:58.875 true 00:39:58.875 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:39:58.875 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:58.875 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:59.135 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:59.135 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:59.395 true 00:39:59.395 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:39:59.395 14:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:59.655 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:59.655 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:59.655 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:59.915 true 00:39:59.915 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:39:59.915 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.175 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.436 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:40:00.436 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:40:00.436 true 00:40:00.436 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:00.436 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:00.696 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:00.957 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:40:00.957 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:40:00.957 true 00:40:00.957 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:00.957 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:01.217 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:01.477 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:40:01.477 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:40:01.477 true 00:40:01.738 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:01.738 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:01.738 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:40:01.999 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:40:01.999 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:40:02.260 true 00:40:02.260 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:02.260 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:02.260 14:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:02.521 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:40:02.521 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:40:02.782 true 00:40:02.782 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:02.782 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.043 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:03.043 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:40:03.043 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:40:03.303 true 00:40:03.303 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:03.303 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:03.564 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:03.564 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:40:03.564 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:40:03.824 true 00:40:03.824 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2002045 00:40:03.824 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.085 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.349 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:40:04.349 14:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:40:04.349 true 00:40:04.349 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:04.349 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:04.613 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:04.873 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:40:04.873 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:40:04.873 true 00:40:04.873 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:04.873 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.133 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.392 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:40:05.392 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:40:05.392 true 00:40:05.392 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:05.392 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:05.653 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:05.914 14:36:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:40:05.914 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:40:05.914 true 00:40:06.174 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:06.174 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.174 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.435 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:40:06.435 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:40:06.435 true 00:40:06.697 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:06.697 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:06.697 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:06.958 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:40:06.958 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:40:06.958 true 00:40:07.219 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:07.219 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.219 14:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:07.480 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:40:07.480 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:40:07.741 true 00:40:07.741 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:07.741 14:36:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:07.741 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.002 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:40:08.002 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:40:08.263 true 00:40:08.263 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:08.263 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:08.263 14:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:08.523 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:40:08.523 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:40:08.783 true 00:40:08.783 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:08.783 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.044 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.044 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:40:09.044 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:40:09.305 true 00:40:09.305 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:09.305 14:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:09.567 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:09.827 14:36:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:40:09.827 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:40:09.827 true 00:40:09.827 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:09.827 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:10.089 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:10.349 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:40:10.349 14:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:40:10.349 true 00:40:10.349 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:10.349 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:10.608 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:10.868 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:40:10.868 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:40:10.868 true 00:40:11.128 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:11.128 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:11.128 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:11.388 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:40:11.388 14:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:40:11.648 true 00:40:11.648 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:11.648 14:36:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:11.648 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:11.909 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:40:11.909 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:40:12.168 true 00:40:12.168 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:12.168 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.168 14:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:12.429 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:40:12.429 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:40:12.689 true 00:40:12.689 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:12.689 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.950 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:12.950 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:40:12.950 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:40:13.210 true 00:40:13.210 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:13.210 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.470 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:13.732 14:36:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:40:13.732 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:40:13.732 true 00:40:13.732 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:13.732 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:13.992 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:14.252 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:40:14.252 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:40:14.252 true 00:40:14.252 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:14.252 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:14.512 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:14.772 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:40:14.772 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:40:14.772 true 00:40:14.772 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:14.772 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.032 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:15.293 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:40:15.293 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:40:15.553 true 00:40:15.553 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:15.553 14:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:15.553 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:15.814 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:40:15.814 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:40:16.075 true 00:40:16.075 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:16.075 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:16.075 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:16.337 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:40:16.337 14:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:40:16.598 true 00:40:16.598 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:16.598 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:16.860 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:16.860 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:40:16.860 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:40:17.121 true 00:40:17.121 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:17.121 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:17.383 14:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:17.383 14:36:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:40:17.383 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:40:17.644 true 00:40:17.644 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:17.644 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:17.904 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:17.904 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:40:17.904 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:40:18.164 true 00:40:18.164 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:18.164 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:18.425 14:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:18.685 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:40:18.685 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:40:18.685 true 00:40:18.685 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:18.685 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:18.945 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:19.205 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:40:19.205 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:40:19.205 true 00:40:19.205 14:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:19.205 14:36:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:19.466 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:19.726 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:40:19.726 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:40:19.986 true 00:40:19.986 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:19.986 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:19.986 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:20.247 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:40:20.247 14:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:40:20.507 true 00:40:20.507 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:20.507 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:20.507 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:20.768 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:40:20.768 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:40:21.028 true 00:40:21.028 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:21.028 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:21.288 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:21.288 14:36:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:40:21.288 14:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:40:21.564 true 00:40:21.564 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:21.564 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:21.831 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:21.831 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:40:21.831 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:40:22.130 true 00:40:22.130 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:22.130 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:22.463 14:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:22.463 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:40:22.463 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:40:22.723 true 00:40:22.723 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:22.723 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:22.723 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:22.983 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:40:22.983 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:40:23.243 true 00:40:23.243 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:23.243 14:36:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.503 14:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:23.503 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:40:23.503 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:40:23.764 true 00:40:23.764 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:23.764 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:24.025 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:24.025 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:40:24.025 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:40:24.286 true 00:40:24.286 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:24.286 14:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:24.545 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:24.804 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:40:24.804 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:40:24.804 true 00:40:24.804 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:24.804 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:25.064 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:25.324 14:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:40:25.324 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:40:25.324 true 00:40:25.324 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:25.324 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:25.585 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:25.846 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:40:25.846 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:40:25.846 true 00:40:26.106 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:26.106 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:26.106 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:26.365 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:40:26.365 14:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:40:26.625 true 00:40:26.625 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:26.625 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:26.625 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:26.886 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:40:26.886 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:40:27.146 true 00:40:27.146 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:27.146 14:36:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:27.406 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.406 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:40:27.406 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:40:27.666 true 00:40:27.666 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:27.666 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:27.927 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:27.927 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:40:27.927 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:40:28.187 true 00:40:28.187 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:28.187 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:28.447 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:28.447 Initializing NVMe Controllers 00:40:28.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:28.447 Controller IO queue size 128, less than required. 00:40:28.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:28.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:40:28.447 Initialization complete. Launching workers. 
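A quick cross-check of the perf summary below: 30532.70 IO/s x 512 B / 2^20 = 14.91 MiB/s, so the throughput column is consistent with 512-byte IOs, and 30532.70 IO/s x 4192.14 us = ~128 IOs in flight (Little's law), matching the 128-entry controller IO queue reported above.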
00:40:28.447 ========================================================
00:40:28.447                                                                            Latency(us)
00:40:28.447 Device Information                                                       :     IOPS      MiB/s    Average        min        max
00:40:28.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30532.70      14.91    4192.14    1122.92   11582.26
00:40:28.447 ========================================================
00:40:28.447 Total                                                                    : 30532.70      14.91    4192.14    1122.92   11582.26
00:40:28.707 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:40:28.707 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:40:28.707 true 00:40:28.707 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2002045 00:40:28.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2002045) - No such process 00:40:28.707 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2002045 00:40:28.707 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:28.967 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:40:29.228 null0 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:29.228 14:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:40:29.489 null1 00:40:29.489 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:29.489 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:29.489 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:40:29.750 null2
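The resize stress that just ended with "kill: (2002045) - No such process" is driven by the loop sketched below. This is a reconstruction from the ns_hotplug_stress.sh@44-@53 trace tags above, not the script verbatim; rpc_py, nqn and perf_pid are stand-ins for the rpc.py path, nqn.2016-06.io.spdk:cnode1 and PID 2002045 visible in the log, and the initial null_size is not shown in this excerpt.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=2002045        # background perf job started earlier in the run
    null_size=1045          # assumed start; the trace shows it counting up to 1055
    while kill -0 "$perf_pid"; do                        # sh@44: perf job still alive?
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1      # sh@45: hot-unplug namespace 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0    # sh@46: plug it back in
        ((++null_size))                                  # sh@49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"    # sh@50: grow NULL1 under load
    done
    wait "$perf_pid"                                     # sh@53: reap the finished perf job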
00:40:29.750 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:29.750 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:29.750 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:40:29.750 null3 00:40:29.750 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:29.750 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:29.750 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:40:30.010 null4 00:40:30.010 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.010 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.010 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:40:30.271 null5 00:40:30.271 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.271 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.271 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:40:30.271 null6 00:40:30.271 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.271 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.271 14:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:40:30.531 null7 00:40:30.531 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:40:30.531 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:40:30.531 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:40:30.531 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.531 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
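The eight bdev_null_create calls above provision the backing devices for the namespace add/remove workers that follow; reconstructed from the @58-@60 trace tags, using the same stand-in variables as the sketch above:

    nthreads=8                                        # sh@58
    pids=()                                           # sh@58: filled with worker PIDs below
    for ((i = 0; i < nthreads; i++)); do              # sh@59
        # 100 MB null bdev with a 4096-byte block size, per the trace arguments
        "$rpc_py" bdev_null_create "null$i" 100 4096  # sh@60
    done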
00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2008680 2008681 2008683 2008685 2008688 2008689 2008691 2008693 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.532 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:30.794 14:36:34 
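From here to the end of the excerpt the xtrace is the interleaved output of eight concurrent workers; the "wait 2008680 2008681 2008683 2008685 2008688 2008689 2008691 2008693" above collects them. The launcher (@62-@66) and the worker body (@14-@18) reconstruct as below; the function internals are inferred from the trace tags, not copied from the script:

    add_remove() {                                       # traced as sh@14-@18
        local nsid=$1 bdev=$2                            # sh@14
        for ((i = 0; i < 10; i++)); do                   # sh@16: ten add/remove cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
        done
    }
    for ((i = 0; i < nthreads; i++)); do                 # sh@62
        add_remove $((i + 1)) "null$i" &                 # sh@63: e.g. add_remove 1 null0
        pids+=($!)                                       # sh@64
    done
    wait "${pids[@]}"                                    # sh@66: the eight PIDs seen above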
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:30.794 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:31.057 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.319 14:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.319 14:36:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.581 14:36:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:31.581 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:31.843 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.105 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:32.365 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.366 14:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.366 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.628 14:36:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:32.628 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.889 14:36:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.889 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:32.890 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:33.150 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:33.151 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:33.412 14:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.412 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:33.674 14:36:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:33.674 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:40:33.936 
14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:33.936 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.198 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:34.467 14:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:34.467 rmmod nvme_tcp 00:40:34.467 rmmod nvme_fabrics 00:40:34.467 rmmod nvme_keyring 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2001574 ']' 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2001574 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2001574 ']' 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2001574 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2001574 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2001574' 00:40:34.467 killing process with pid 2001574 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2001574 00:40:34.467 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2001574 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.728 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:36.643 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:36.904 00:40:36.904 real 0m48.765s 00:40:36.904 user 3m1.806s 00:40:36.904 sys 0m23.271s 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:40:36.904 ************************************ 00:40:36.904 END TEST nvmf_ns_hotplug_stress 00:40:36.904 ************************************ 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:36.904 ************************************ 00:40:36.904 START TEST nvmf_delete_subsystem 00:40:36.904 ************************************ 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:40:36.904 * Looking for test storage... 00:40:36.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:36.904 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:37.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.166 --rc genhtml_branch_coverage=1 00:40:37.166 --rc genhtml_function_coverage=1 00:40:37.166 --rc genhtml_legend=1 00:40:37.166 --rc geninfo_all_blocks=1 00:40:37.166 --rc geninfo_unexecuted_blocks=1 00:40:37.166 00:40:37.166 ' 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:37.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.166 --rc genhtml_branch_coverage=1 00:40:37.166 --rc genhtml_function_coverage=1 00:40:37.166 --rc genhtml_legend=1 00:40:37.166 --rc geninfo_all_blocks=1 00:40:37.166 --rc geninfo_unexecuted_blocks=1 00:40:37.166 00:40:37.166 ' 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:37.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.166 --rc genhtml_branch_coverage=1 00:40:37.166 --rc genhtml_function_coverage=1 00:40:37.166 --rc genhtml_legend=1 00:40:37.166 --rc geninfo_all_blocks=1 00:40:37.166 --rc geninfo_unexecuted_blocks=1 00:40:37.166 00:40:37.166 ' 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:37.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:37.166 --rc genhtml_branch_coverage=1 00:40:37.166 --rc genhtml_function_coverage=1 00:40:37.166 --rc 
genhtml_legend=1 00:40:37.166 --rc geninfo_all_blocks=1 00:40:37.166 --rc geninfo_unexecuted_blocks=1 00:40:37.166 00:40:37.166 ' 00:40:37.166 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.167 14:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:40:37.167 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:45.307 14:36:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:45.307 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:45.308 14:36:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:45.308 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:45.308 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.308 14:36:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:45.308 Found net devices under 0000:31:00.0: cvl_0_0 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:45.308 Found net devices under 0000:31:00.1: cvl_0_1 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:45.308 14:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:45.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:45.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:40:45.308 00:40:45.308 --- 10.0.0.2 ping statistics --- 00:40:45.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.308 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:45.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:45.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:40:45.308 00:40:45.308 --- 10.0.0.1 ping statistics --- 00:40:45.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:45.308 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2013901 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2013901 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:45.308 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2013901 ']' 00:40:45.309 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.309 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:45.309 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
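The trace above starts nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0x3, then blocks in waitforlisten until the target's RPC socket answers. A minimal bash sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket path and a ten-second budget (both assumptions; the real waitforlisten helper in autotest_common.sh does more validation):

    # Simplified stand-in for waitforlisten: launch the target in the test
    # namespace, then poll until its RPC UNIX socket appears or time runs out.
    start_tgt_and_wait() {
        local sock=/var/tmp/spdk.sock    # assumed default RPC socket path
        ip netns exec cvl_0_0_ns_spdk \
            ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
        local pid=$!
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            [ -S "$sock" ] && return 0                # socket is up; RPCs can proceed
            sleep 0.1
        done
        return 1
    }

Polling for the socket rather than sleeping a fixed interval keeps the fast path fast while still tolerating a slower interrupt-mode startup.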
00:40:45.309 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:45.309 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.309 [2024-10-13 14:36:48.327760] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:45.309 [2024-10-13 14:36:48.329284] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:40:45.309 [2024-10-13 14:36:48.329345] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.309 [2024-10-13 14:36:48.471511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:45.309 [2024-10-13 14:36:48.521204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:45.309 [2024-10-13 14:36:48.547900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.309 [2024-10-13 14:36:48.547944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.309 [2024-10-13 14:36:48.547953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.309 [2024-10-13 14:36:48.547960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.309 [2024-10-13 14:36:48.547966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:45.309 [2024-10-13 14:36:48.549581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:45.309 [2024-10-13 14:36:48.549584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:45.309 [2024-10-13 14:36:48.612784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:45.309 [2024-10-13 14:36:48.613300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:45.309 [2024-10-13 14:36:48.613641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
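The namespace plumbing and target launch traced above reduce to the sketch below, assembled from the commands visible in this log. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this rig's NIC pair and should be treated as assumptions anywhere else.
# Put one port of the NIC pair into a private namespace for the target;
# the initiator side (cvl_0_1) stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the harness tags the rule with an SPDK_NVMF comment
# so it can be stripped again during cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Reachability check in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch the target inside the namespace in interrupt mode on cores 0-1
# (path shortened from the full workspace path in this log).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3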
00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 [2024-10-13 14:36:49.178632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 [2024-10-13 14:36:49.211102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 NULL1 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.570 14:36:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 Delay0 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2014060 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:40:45.570 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:45.831 [2024-10-13 14:36:49.421181] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
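The provisioning that delete_subsystem.sh just performed maps onto the standalone sketch below; rpc_cmd in the harness forwards these same verbs to the target's RPC socket, and scripts/rpc.py is assumed here as the standalone client. The bdev_delay_create arguments are microseconds, so Delay0 adds roughly one second of latency per I/O, which is what keeps 128 queued commands in flight long enough for the subsystem to be deleted under them.
# Transport, subsystem, listener, then a null bdev wrapped in a delay bdev.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB backing bdev, 512 B blocks
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# The I/O load the subsystem will be deleted out from under:
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
The storm of "completed with error (sct=0, sc=8)" completions that follows the nvmf_delete_subsystem call below is evidently the behavior under test: queued I/O against the deleted subsystem fails back to the initiator instead of hanging, and perf exits with errors rather than stalling.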
00:40:47.745 14:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:47.745 14:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:47.745 14:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 [2024-10-13 14:36:51.580888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a41b0 is same with the state(6) to be set 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read 
completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 [2024-10-13 14:36:51.581773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3df0 is same with the state(6) to be set 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 starting 
I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Write completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 starting I/O failed: -6 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.006 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 starting I/O failed: -6 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 starting I/O failed: -6 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 starting I/O failed: -6 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 starting I/O failed: -6 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 [2024-10-13 14:36:51.586058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ab0000c00 is same with the state(6) to be set 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read 
completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.007 Write completed with error (sct=0, sc=8) 00:40:48.007 Read completed with error (sct=0, sc=8) 00:40:48.948 [2024-10-13 14:36:52.563241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ee0 is same with the state(6) to be set 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 [2024-10-13 14:36:52.581628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a4390 is same with the state(6) to be set 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error 
(sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 [2024-10-13 14:36:52.581882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a3fd0 is same with the state(6) to be set 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 [2024-10-13 14:36:52.585354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ab000cfe0 is same with the state(6) to be set 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Write completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 
00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 Read completed with error (sct=0, sc=8) 00:40:48.948 [2024-10-13 14:36:52.585458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3ab000d780 is same with the state(6) to be set 00:40:48.948 Initializing NVMe Controllers 00:40:48.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:48.948 Controller IO queue size 128, less than required. 00:40:48.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:48.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:48.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:48.948 Initialization complete. Launching workers. 00:40:48.948 ======================================================== 00:40:48.948 Latency(us) 00:40:48.948 Device Information : IOPS MiB/s Average min max 00:40:48.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.84 0.08 896904.68 489.71 1006591.34 00:40:48.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.38 0.08 1001578.35 297.92 2002130.70 00:40:48.948 ======================================================== 00:40:48.948 Total : 327.22 0.16 947568.64 297.92 2002130.70 00:40:48.949 00:40:48.949 [2024-10-13 14:36:52.585967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a8ee0 (9): Bad file descriptor 00:40:48.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:40:48.949 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.949 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:40:48.949 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2014060 00:40:48.949 14:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2014060 00:40:49.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2014060) - No such process 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2014060 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2014060 00:40:49.520 14:36:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2014060 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:49.520 [2024-10-13 14:36:53.118907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2014824 00:40:49.520 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:40:49.521 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:49.521 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:40:49.521 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:49.781 [2024-10-13 14:36:53.306645] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:40:50.041 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:50.041 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:50.041 14:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:50.611 14:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:50.611 14:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:50.611 14:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:51.182 14:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:51.182 14:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:51.182 14:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:51.752 14:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:51.752 14:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:51.752 14:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:52.012 14:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:52.012 14:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:52.012 14:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:52.582 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:52.582 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:52.583 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:52.843 Initializing NVMe Controllers 00:40:52.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:52.843 Controller IO queue size 128, less than required. 
00:40:52.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:52.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:52.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:52.843 Initialization complete. Launching workers. 00:40:52.843 ======================================================== 00:40:52.843 Latency(us) 00:40:52.843 Device Information : IOPS MiB/s Average min max 00:40:52.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002029.01 1000095.07 1005768.97 00:40:52.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003770.76 1000066.81 1009229.35 00:40:52.843 ======================================================== 00:40:52.843 Total : 256.00 0.12 1002899.88 1000066.81 1009229.35 00:40:52.843 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2014824 00:40:53.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2014824) - No such process 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2014824 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:53.104 rmmod nvme_tcp 00:40:53.104 rmmod nvme_fabrics 00:40:53.104 rmmod nvme_keyring 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2013901 ']' 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2013901 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2013901 ']' 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 
2013901 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:53.104 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2013901 00:40:53.364 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:53.364 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:53.364 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2013901' 00:40:53.364 killing process with pid 2013901 00:40:53.364 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2013901 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2013901 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:53.365 14:36:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.908 14:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:55.908 00:40:55.908 real 0m18.567s 00:40:55.908 user 0m26.579s 00:40:55.908 sys 0m7.652s 00:40:55.908 14:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:55.908 14:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:55.908 ************************************ 00:40:55.908 END TEST nvmf_delete_subsystem 00:40:55.908 ************************************ 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:55.908 ************************************ 00:40:55.908 START TEST nvmf_host_management 00:40:55.908 ************************************ 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:55.908 * Looking for test storage... 00:40:55.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:55.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.908 --rc genhtml_branch_coverage=1 00:40:55.908 --rc genhtml_function_coverage=1 00:40:55.908 --rc genhtml_legend=1 00:40:55.908 --rc geninfo_all_blocks=1 00:40:55.908 --rc geninfo_unexecuted_blocks=1 00:40:55.908 00:40:55.908 ' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:55.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.908 --rc genhtml_branch_coverage=1 00:40:55.908 --rc genhtml_function_coverage=1 00:40:55.908 --rc genhtml_legend=1 00:40:55.908 --rc geninfo_all_blocks=1 00:40:55.908 --rc geninfo_unexecuted_blocks=1 00:40:55.908 00:40:55.908 ' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:55.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.908 --rc genhtml_branch_coverage=1 00:40:55.908 --rc genhtml_function_coverage=1 00:40:55.908 --rc genhtml_legend=1 00:40:55.908 --rc geninfo_all_blocks=1 00:40:55.908 --rc geninfo_unexecuted_blocks=1 00:40:55.908 00:40:55.908 ' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:55.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.908 --rc genhtml_branch_coverage=1 00:40:55.908 --rc genhtml_function_coverage=1 00:40:55.908 --rc genhtml_legend=1 
00:40:55.908 --rc geninfo_all_blocks=1 00:40:55.908 --rc geninfo_unexecuted_blocks=1 00:40:55.908 00:40:55.908 ' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.908 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.909 14:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.909 14:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:04.043 14:37:06 
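The build_nvmf_app_args trace above assembles the target's argument array piece by piece: the shared-memory id and tracepoint mask (-i "$NVMF_APP_SHM_ID" -e 0xFFFF) are always appended, and because this is the interrupt_mode flavor of the suite, the '[' 1 -eq 1 ']' guard at nvmf/common.sh@33 also appends --interrupt-mode. Reduced to a sketch (the guard variable's name here is an assumption; the trace only shows its expanded value of 1):

    # conditional argument-array pattern from build_nvmf_app_args
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    if [ "${TEST_INTERRUPT_MODE:-0}" -eq 1 ]; then   # traced above as '[' 1 -eq 1 ']'
        NVMF_APP+=(--interrupt-mode)
    fi
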
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:04.043 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:04.043 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
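Both ports of an Intel E810 NIC are identified here by vendor:device pair (0x8086:0x159b, bound to the ice driver) at PCI addresses 0000:31:00.0 and 0000:31:00.1. Outside the harness the same scan can be approximated with lspci (a rough equivalent, not the script's actual code):

    # list E810 ports by PCI address and show their kernel netdev names
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "Found $bdf (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$bdf/net/"    # e.g. cvl_0_0, cvl_0_1
    done
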
00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:04.043 Found net devices under 0000:31:00.0: cvl_0_0 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:04.043 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:04.044 Found net devices under 0000:31:00.1: cvl_0_1 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:04.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:04.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:41:04.044 00:41:04.044 --- 10.0.0.2 ping statistics --- 00:41:04.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.044 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:04.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:04.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:41:04.044 00:41:04.044 --- 10.0.0.1 ping statistics --- 00:41:04.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:04.044 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2019662 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2019662 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2019662 ']' 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:04.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:04.044 14:37:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.044 [2024-10-13 14:37:06.764944] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:04.044 [2024-10-13 14:37:06.765910] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:41:04.044 [2024-10-13 14:37:06.765946] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:04.044 [2024-10-13 14:37:06.902525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:04.044 [2024-10-13 14:37:06.951319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:04.044 [2024-10-13 14:37:06.970200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:04.044 [2024-10-13 14:37:06.970232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:04.044 [2024-10-13 14:37:06.970240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:04.044 [2024-10-13 14:37:06.970246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:04.044 [2024-10-13 14:37:06.970252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:04.044 [2024-10-13 14:37:06.971955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:04.044 [2024-10-13 14:37:06.972118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:04.044 [2024-10-13 14:37:06.972339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:04.044 [2024-10-13 14:37:06.972339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:04.044 [2024-10-13 14:37:07.020975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:04.044 [2024-10-13 14:37:07.022117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:04.044 [2024-10-13 14:37:07.022462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:04.044 [2024-10-13 14:37:07.023096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:04.044 [2024-10-13 14:37:07.023160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
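To recap the nvmf_tcp_init sequence above: the target-side port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, so the NVMe/TCP traffic actually traverses the physical link, and both directions are verified with a ping before the target starts. Condensed from the trace (same names and addresses):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt is then launched inside that namespace with -m 0x1E --interrupt-mode, which is why the notices above report four reactors (cores 1 through 4) and every nvmf_tgt poll group thread being set to interrupt mode.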
00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.044 [2024-10-13 14:37:07.589243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:04.044 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.045 Malloc0 00:41:04.045 [2024-10-13 14:37:07.681505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2019937 00:41:04.045 14:37:07 
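The steps at host_management.sh@22-@30 write a batch of RPCs into rpcs.txt and replay it against the target; the batch itself is not echoed into the trace. Judging from the Malloc0 bdev that appears, the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE constants (64 MiB, 512 B), the cnode0/host0 NQNs used later, and the listener that comes up on 10.0.0.2:4420, it plausibly corresponds to rpc.py calls of this shape (a reconstruction, not the logged contents; the transport itself was already created at @18):

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
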
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2019937 /var/tmp/bdevperf.sock 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2019937 ']' 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:04.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:04.045 { 00:41:04.045 "params": { 00:41:04.045 "name": "Nvme$subsystem", 00:41:04.045 "trtype": "$TEST_TRANSPORT", 00:41:04.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.045 "adrfam": "ipv4", 00:41:04.045 "trsvcid": "$NVMF_PORT", 00:41:04.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.045 "hdgst": ${hdgst:-false}, 00:41:04.045 "ddgst": ${ddgst:-false} 00:41:04.045 }, 00:41:04.045 "method": "bdev_nvme_attach_controller" 00:41:04.045 } 00:41:04.045 EOF 00:41:04.045 )") 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:41:04.045 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
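gen_nvmf_target_json expands the heredoc template above once per requested subsystem id (here just 0), and the jq . at nvmf/common.sh@582 pretty-prints the merged document, which is what appears next in the log. bdevperf never reads a file on disk: the --json /dev/fd/63 argument in the @72 invocation is the read end of a bash process substitution, so the call has the shape (paths shortened):

    bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
             -q 64 -o 65536 -w verify -t 10    # 64-deep, 64 KiB I/Os, verify workload, 10 s
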
00:41:04.309 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:41:04.309 14:37:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:04.309 "params": { 00:41:04.309 "name": "Nvme0", 00:41:04.309 "trtype": "tcp", 00:41:04.309 "traddr": "10.0.0.2", 00:41:04.309 "adrfam": "ipv4", 00:41:04.309 "trsvcid": "4420", 00:41:04.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:04.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:04.309 "hdgst": false, 00:41:04.309 "ddgst": false 00:41:04.309 }, 00:41:04.309 "method": "bdev_nvme_attach_controller" 00:41:04.309 }' 00:41:04.309 [2024-10-13 14:37:07.786214] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:41:04.309 [2024-10-13 14:37:07.786271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019937 ] 00:41:04.309 [2024-10-13 14:37:07.917512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:04.309 [2024-10-13 14:37:07.966533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.309 [2024-10-13 14:37:07.994781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:04.576 Running I/O for 10 seconds... 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@54 -- # (( i != 0 )) 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=686 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 686 -ge 100 ']' 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.150 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.150 [2024-10-13 14:37:08.672934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.672991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.150 [2024-10-13 14:37:08.673082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be 
set 00:41:05.150 [2024-10-13 14:37:08.673520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107eee0 is same with the state(6) to be set 00:41:05.151 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.151 14:37:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:41:05.151 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.151 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:05.151 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.151 14:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:41:05.151 [2024-10-13 14:37:08.690418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.151 [2024-10-13 14:37:08.690474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.151 [2024-10-13 14:37:08.690494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.151 [2024-10-13 14:37:08.690511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:05.151 [2024-10-13 14:37:08.690527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885dc0 is same with the state(6) to be set 00:41:05.151 [2024-10-13 14:37:08.690646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.151 [2024-10-13 14:37:08.690658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.151 [2024-10-13 14:37:08.690691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.151 [2024-10-13 14:37:08.690709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.690720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.151 [2024-10-13 14:37:08.690728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.151 [2024-10-13 14:37:08.691270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.152 [2024-10-13 14:37:08.691732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.152 [2024-10-13 14:37:08.691739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.153 [2024-10-13 14:37:08.691749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.153 [2024-10-13 14:37:08.691757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.153 [2024-10-13 14:37:08.691766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.153 [2024-10-13 14:37:08.691774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.153 [2024-10-13 14:37:08.691784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.153 [2024-10-13 14:37:08.691791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.153 [2024-10-13 14:37:08.691801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:05.153 [2024-10-13 14:37:08.691811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:05.153 [2024-10-13 14:37:08.691891] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa9f020 was disconnected and freed. reset controller. 00:41:05.153 [2024-10-13 14:37:08.693097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:05.153 task offset: 97920 on job bdev=Nvme0n1 fails 00:41:05.153 00:41:05.153 Latency(us) 00:41:05.153 [2024-10-13T12:37:08.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.153 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:05.153 Job: Nvme0n1 ended in about 0.55 seconds with error 00:41:05.153 Verification LBA range: start 0x0 length 0x400 00:41:05.153 Nvme0n1 : 0.55 1398.45 87.40 116.99 0.00 41195.31 1833.83 35472.21 00:41:05.153 [2024-10-13T12:37:08.860Z] =================================================================================================================== 00:41:05.153 [2024-10-13T12:37:08.860Z] Total : 1398.45 87.40 116.99 0.00 41195.31 1833.83 35472.21 00:41:05.153 [2024-10-13 14:37:08.695321] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:05.153 [2024-10-13 14:37:08.695358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x885dc0 (9): Bad file descriptor 00:41:05.153 [2024-10-13 14:37:08.787198] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2019937 00:41:06.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2019937) - No such process 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:06.092 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:06.092 { 00:41:06.092 "params": { 00:41:06.092 "name": "Nvme$subsystem", 00:41:06.092 "trtype": "$TEST_TRANSPORT", 00:41:06.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.092 "adrfam": "ipv4", 00:41:06.092 "trsvcid": "$NVMF_PORT", 00:41:06.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.092 "hdgst": ${hdgst:-false}, 00:41:06.092 
"ddgst": ${ddgst:-false} 00:41:06.092 }, 00:41:06.092 "method": "bdev_nvme_attach_controller" 00:41:06.092 } 00:41:06.092 EOF 00:41:06.092 )") 00:41:06.093 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:41:06.093 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:41:06.093 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:41:06.093 14:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:06.093 "params": { 00:41:06.093 "name": "Nvme0", 00:41:06.093 "trtype": "tcp", 00:41:06.093 "traddr": "10.0.0.2", 00:41:06.093 "adrfam": "ipv4", 00:41:06.093 "trsvcid": "4420", 00:41:06.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:06.093 "hdgst": false, 00:41:06.093 "ddgst": false 00:41:06.093 }, 00:41:06.093 "method": "bdev_nvme_attach_controller" 00:41:06.093 }' 00:41:06.093 [2024-10-13 14:37:09.750439] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:41:06.093 [2024-10-13 14:37:09.750493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020360 ] 00:41:06.353 [2024-10-13 14:37:09.880853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:06.353 [2024-10-13 14:37:09.928101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.353 [2024-10-13 14:37:09.946115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.612 Running I/O for 1 seconds... 
00:41:07.554 1408.00 IOPS, 88.00 MiB/s 00:41:07.554 Latency(us) 00:41:07.554 [2024-10-13T12:37:11.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:07.554 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:41:07.554 Verification LBA range: start 0x0 length 0x400 00:41:07.554 Nvme0n1 : 1.02 1443.72 90.23 0.00 0.00 43607.06 10291.32 33939.46 00:41:07.554 [2024-10-13T12:37:11.261Z] =================================================================================================================== 00:41:07.554 [2024-10-13T12:37:11.261Z] Total : 1443.72 90.23 0.00 0.00 43607.06 10291.32 33939.46 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:07.815 rmmod nvme_tcp 00:41:07.815 rmmod nvme_fabrics 00:41:07.815 rmmod nvme_keyring 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2019662 ']' 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2019662 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2019662 ']' 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2019662 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:07.815 14:37:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2019662 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2019662' 00:41:07.815 killing process with pid 2019662 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2019662 00:41:07.815 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2019662 00:41:08.076 [2024-10-13 14:37:11.581389] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:08.076 14:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:09.987 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:41:10.247 00:41:10.247 real 0m14.622s 00:41:10.247 user 0m19.636s 00:41:10.247 sys 0m7.296s 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:41:10.247 ************************************ 00:41:10.247 END TEST nvmf_host_management 00:41:10.247 ************************************ 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:10.247 ************************************ 00:41:10.247 START TEST nvmf_lvol 00:41:10.247 ************************************ 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:41:10.247 * Looking for test storage... 00:41:10.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:41:10.247 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.514 --rc genhtml_branch_coverage=1 00:41:10.514 --rc genhtml_function_coverage=1 00:41:10.514 --rc genhtml_legend=1 00:41:10.514 --rc geninfo_all_blocks=1 00:41:10.514 --rc geninfo_unexecuted_blocks=1 00:41:10.514 00:41:10.514 ' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.514 --rc genhtml_branch_coverage=1 00:41:10.514 --rc genhtml_function_coverage=1 00:41:10.514 --rc genhtml_legend=1 00:41:10.514 --rc geninfo_all_blocks=1 00:41:10.514 --rc geninfo_unexecuted_blocks=1 00:41:10.514 00:41:10.514 ' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.514 --rc genhtml_branch_coverage=1 00:41:10.514 --rc genhtml_function_coverage=1 00:41:10.514 --rc genhtml_legend=1 00:41:10.514 --rc geninfo_all_blocks=1 00:41:10.514 --rc geninfo_unexecuted_blocks=1 00:41:10.514 00:41:10.514 ' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:10.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.514 --rc genhtml_branch_coverage=1 00:41:10.514 --rc genhtml_function_coverage=1 00:41:10.514 --rc genhtml_legend=1 00:41:10.514 --rc geninfo_all_blocks=1 00:41:10.514 --rc geninfo_unexecuted_blocks=1 00:41:10.514 00:41:10.514 ' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:10.514 14:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:10.514 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:10.514 14:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:10.514 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:10.514 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:41:10.515 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:18.741 14:37:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:18.741 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:18.741 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:18.741 Found net devices under 0000:31:00.0: cvl_0_0 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:18.741 Found net devices under 0000:31:00.1: cvl_0_1 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:18.741 
14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:18.741 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:18.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:18.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:41:18.742 00:41:18.742 --- 10.0.0.2 ping statistics --- 00:41:18.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.742 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:18.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:18.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:41:18.742 00:41:18.742 --- 10.0.0.1 ping statistics --- 00:41:18.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:18.742 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2024814 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2024814 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2024814 ']' 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:18.742 14:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:18.742 [2024-10-13 14:37:21.478296] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
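The two one-packet pings confirm the split-namespace topology that nvmf_tcp_init assembled just above. Condensed from the ip/iptables commands in the log (interface names cvl_0_0/cvl_0_1 come from the e810 port detection earlier; the iptables comment tag is dropped here for brevity):

ip netns add cvl_0_0_ns_spdk                       # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # initiator -> target (0.657 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.302 ms above)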
00:41:18.742 [2024-10-13 14:37:21.479492] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:41:18.742 [2024-10-13 14:37:21.479546] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:18.742 [2024-10-13 14:37:21.623178] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:18.742 [2024-10-13 14:37:21.673522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:18.742 [2024-10-13 14:37:21.700927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:18.742 [2024-10-13 14:37:21.700970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:18.742 [2024-10-13 14:37:21.700979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:18.742 [2024-10-13 14:37:21.700986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:18.742 [2024-10-13 14:37:21.700993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:18.742 [2024-10-13 14:37:21.702660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:18.742 [2024-10-13 14:37:21.702819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.742 [2024-10-13 14:37:21.702819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:18.742 [2024-10-13 14:37:21.761769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:18.742 [2024-10-13 14:37:21.762768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:18.742 [2024-10-13 14:37:21.763614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:18.742 [2024-10-13 14:37:21.763714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
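Distilled from the nvmfappstart call above, the launch that produced these notices (flags exactly as logged; only the long workspace path is shortened):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
# -m 0x7            three reactors on cores 0-2, matching the three
#                   "Reactor started" notices above
# --interrupt-mode  reactors wait on file descriptors instead of busy-polling;
#                   the thread.c notices show each poll-group thread set to
#                   intr mode
# -e 0xFFFF         all tracepoint groups enabled, hence the spdk_trace hint
#                   in the app_setup_trace notices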
00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:18.742 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:19.003 [2024-10-13 14:37:22.471693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:19.003 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:19.003 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:41:19.264 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:19.264 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:41:19.264 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:41:19.524 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:41:19.785 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=22b55f7e-8f4c-4e07-b028-b1f3a8e49aec 00:41:19.785 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22b55f7e-8f4c-4e07-b028-b1f3a8e49aec lvol 20 00:41:19.785 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a191b638-3a27-48a4-bcb1-833431682426 00:41:19.785 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:20.046 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a191b638-3a27-48a4-bcb1-833431682426 00:41:20.308 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.308 [2024-10-13 14:37:23.947668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:41:20.308 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:20.570 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:41:20.570 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2025439 00:41:20.570 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:41:21.510 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a191b638-3a27-48a4-bcb1-833431682426 MY_SNAPSHOT 00:41:21.770 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d6997be7-eb0d-4b39-80cb-d55c64b14eb5 00:41:21.770 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a191b638-3a27-48a4-bcb1-833431682426 30 00:41:22.031 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d6997be7-eb0d-4b39-80cb-d55c64b14eb5 MY_CLONE 00:41:22.292 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=be150fde-64fb-42c4-9313-d0f34f97acb5 00:41:22.292 14:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate be150fde-64fb-42c4-9313-d0f34f97acb5 00:41:22.866 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2025439 00:41:31.000 Initializing NVMe Controllers 00:41:31.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:41:31.000 Controller IO queue size 128, less than required. 00:41:31.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:31.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:41:31.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:41:31.000 Initialization complete. Launching workers. 
00:41:31.000 ======================================================== 00:41:31.000 Latency(us) 00:41:31.000 Device Information : IOPS MiB/s Average min max 00:41:31.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15202.80 59.39 8421.64 1561.40 69272.47 00:41:31.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15408.40 60.19 8307.41 3750.83 59026.58 00:41:31.000 ======================================================== 00:41:31.000 Total : 30611.20 119.57 8364.14 1561.40 69272.47 00:41:31.000 00:41:31.000 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:31.261 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a191b638-3a27-48a4-bcb1-833431682426 00:41:31.261 14:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22b55f7e-8f4c-4e07-b028-b1f3a8e49aec 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:31.521 rmmod nvme_tcp 00:41:31.521 rmmod nvme_fabrics 00:41:31.521 rmmod nvme_keyring 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2024814 ']' 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2024814 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2024814 ']' 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2024814 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2024814 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2024814' 00:41:31.521 killing process with pid 2024814 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2024814 00:41:31.521 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2024814 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:31.781 14:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.692 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:33.692 00:41:33.692 real 0m23.616s 00:41:33.692 user 0m55.555s 00:41:33.692 sys 0m10.664s 00:41:33.692 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:33.692 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:41:33.692 ************************************ 00:41:33.692 END TEST nvmf_lvol 00:41:33.692 ************************************ 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:33.952 ************************************ 00:41:33.952 START TEST nvmf_lvs_grow 00:41:33.952 
************************************ 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:41:33.952 * Looking for test storage... 00:41:33.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.952 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.212 --rc genhtml_branch_coverage=1 00:41:34.212 --rc genhtml_function_coverage=1 00:41:34.212 --rc genhtml_legend=1 00:41:34.212 --rc geninfo_all_blocks=1 00:41:34.212 --rc geninfo_unexecuted_blocks=1 00:41:34.212 00:41:34.212 ' 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.212 --rc genhtml_branch_coverage=1 00:41:34.212 --rc genhtml_function_coverage=1 00:41:34.212 --rc genhtml_legend=1 00:41:34.212 --rc geninfo_all_blocks=1 00:41:34.212 --rc geninfo_unexecuted_blocks=1 00:41:34.212 00:41:34.212 ' 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.212 --rc genhtml_branch_coverage=1 00:41:34.212 --rc genhtml_function_coverage=1 00:41:34.212 --rc genhtml_legend=1 00:41:34.212 --rc geninfo_all_blocks=1 00:41:34.212 --rc geninfo_unexecuted_blocks=1 00:41:34.212 00:41:34.212 ' 00:41:34.212 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.212 --rc genhtml_branch_coverage=1 00:41:34.212 --rc genhtml_function_coverage=1 00:41:34.212 --rc genhtml_legend=1 00:41:34.212 --rc geninfo_all_blocks=1 00:41:34.212 --rc geninfo_unexecuted_blocks=1 00:41:34.212 00:41:34.213 ' 00:41:34.213 14:37:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
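Before the lvs_grow target starts, nvmf/common.sh probes the PCI bus for supported NICs (the e810/x722/mlx device-ID tables traced below), finds the two ports of an Intel E810 (device 0x159b) as cvl_0_0 and cvl_0_1, and wires them into a back-to-back topology: the target port moves into its own network namespace while the initiator port stays in the default one. Roughly, as the subsequent trace performs it:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # sanity: initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back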
00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:41:34.213 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:42.357 14:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:42.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:42.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:42.357 Found net devices under 0000:31:00.0: cvl_0_0 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:42.357 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:42.358 Found net devices under 0000:31:00.1: cvl_0_1 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:42.358 14:37:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:42.358 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:42.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:42.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:41:42.358 00:41:42.358 --- 10.0.0.2 ping statistics --- 00:41:42.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.358 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:42.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:42.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:41:42.358 00:41:42.358 --- 10.0.0.1 ping statistics --- 00:41:42.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.358 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2031595 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2031595 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2031595 ']' 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:42.358 14:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:42.358 [2024-10-13 14:37:45.237239] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:41:42.358 [2024-10-13 14:37:45.238206] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:41:42.358 [2024-10-13 14:37:45.238241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:42.358 [2024-10-13 14:37:45.374371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:42.358 [2024-10-13 14:37:45.421730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.358 [2024-10-13 14:37:45.438966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:42.358 [2024-10-13 14:37:45.438997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:42.358 [2024-10-13 14:37:45.439005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:42.358 [2024-10-13 14:37:45.439012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:42.358 [2024-10-13 14:37:45.439018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:42.358 [2024-10-13 14:37:45.439587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.358 [2024-10-13 14:37:45.487601] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:42.358 [2024-10-13 14:37:45.487852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
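The lvs_grow test that follows checks that a logical volume store can be enlarged when its backing device grows. The backing device is a file-backed AIO bdev: with a 4 MiB cluster size, a 200 MiB file yields 49 data clusters (the remaining space holds lvstore metadata), and after the file is truncated to 400 MiB and the bdev rescanned, bdev_lvol_grow_lvstore raises that to 99, which is what the jq checks on total_data_clusters in the trace verify. A sketch of the sequence (aio_file is a placeholder; the test uses test/nvmf/target/aio_bdev):

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096        # 4 KiB block size
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
  truncate -s 400M aio_file                            # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                      # bdev picks up the new size
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99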
00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:42.358 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:42.619 [2024-10-13 14:37:46.204397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:42.619 ************************************ 00:41:42.619 START TEST lvs_grow_clean 00:41:42.619 ************************************ 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:42.619 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:42.880 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:42.880 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:43.140 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:43.140 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:43.140 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:43.140 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:43.140 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:43.400 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 002c9f51-f870-400e-9cc8-bf2419341d2d lvol 150 00:41:43.400 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e87c2e9-1a13-450d-933e-13e638ac6d00 00:41:43.400 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:43.400 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:43.661 [2024-10-13 14:37:47.192126] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:43.661 [2024-10-13 14:37:47.192289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:43.661 true 00:41:43.661 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:43.661 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:43.922 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:43.922 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:43.922 14:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e87c2e9-1a13-450d-933e-13e638ac6d00 00:41:44.182 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:44.182 [2024-10-13 14:37:47.868635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.182 14:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2032301 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2032301 /var/tmp/bdevperf.sock 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2032301 ']' 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:44.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:44.441 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:44.441 [2024-10-13 14:37:48.109257] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:41:44.441 [2024-10-13 14:37:48.109318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032301 ] 00:41:44.701 [2024-10-13 14:37:48.240664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
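The I/O generator for this run is bdevperf rather than spdk_nvme_perf: started with -z it idles until driven over its own RPC socket, an NVMe bdev is then attached to the exported subsystem over TCP, and bdevperf.py perform_tests launches the 10-second random-write job whose per-second samples follow. In outline (sockets, masks and names as in this trace):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # -> Nvme0n1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # while the job runs, bdev_lvol_grow_lvstore is issued and the cluster count re-checked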
00:41:44.701 [2024-10-13 14:37:48.288679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.701 [2024-10-13 14:37:48.306964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:45.272 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:45.272 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:41:45.272 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:45.533 Nvme0n1 00:41:45.533 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:45.794 [ 00:41:45.794 { 00:41:45.794 "name": "Nvme0n1", 00:41:45.794 "aliases": [ 00:41:45.794 "2e87c2e9-1a13-450d-933e-13e638ac6d00" 00:41:45.794 ], 00:41:45.794 "product_name": "NVMe disk", 00:41:45.794 "block_size": 4096, 00:41:45.794 "num_blocks": 38912, 00:41:45.794 "uuid": "2e87c2e9-1a13-450d-933e-13e638ac6d00", 00:41:45.794 "numa_id": 0, 00:41:45.794 "assigned_rate_limits": { 00:41:45.794 "rw_ios_per_sec": 0, 00:41:45.794 "rw_mbytes_per_sec": 0, 00:41:45.794 "r_mbytes_per_sec": 0, 00:41:45.794 "w_mbytes_per_sec": 0 00:41:45.794 }, 00:41:45.794 "claimed": false, 00:41:45.794 "zoned": false, 00:41:45.794 "supported_io_types": { 00:41:45.794 "read": true, 00:41:45.794 "write": true, 00:41:45.794 "unmap": true, 00:41:45.794 "flush": true, 00:41:45.794 "reset": true, 00:41:45.794 "nvme_admin": true, 00:41:45.794 "nvme_io": true, 00:41:45.794 "nvme_io_md": false, 00:41:45.794 "write_zeroes": true, 00:41:45.794 "zcopy": false, 00:41:45.794 "get_zone_info": false, 00:41:45.794 "zone_management": false, 00:41:45.794 "zone_append": false, 00:41:45.794 "compare": true, 00:41:45.794 "compare_and_write": true, 00:41:45.794 "abort": true, 00:41:45.794 "seek_hole": false, 00:41:45.794 "seek_data": false, 00:41:45.794 "copy": true, 00:41:45.794 "nvme_iov_md": false 00:41:45.794 }, 00:41:45.794 "memory_domains": [ 00:41:45.794 { 00:41:45.794 "dma_device_id": "system", 00:41:45.794 "dma_device_type": 1 00:41:45.794 } 00:41:45.794 ], 00:41:45.794 "driver_specific": { 00:41:45.794 "nvme": [ 00:41:45.794 { 00:41:45.794 "trid": { 00:41:45.794 "trtype": "TCP", 00:41:45.794 "adrfam": "IPv4", 00:41:45.794 "traddr": "10.0.0.2", 00:41:45.794 "trsvcid": "4420", 00:41:45.794 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:45.794 }, 00:41:45.794 "ctrlr_data": { 00:41:45.794 "cntlid": 1, 00:41:45.794 "vendor_id": "0x8086", 00:41:45.794 "model_number": "SPDK bdev Controller", 00:41:45.794 "serial_number": "SPDK0", 00:41:45.794 "firmware_revision": "25.01", 00:41:45.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:45.794 "oacs": { 00:41:45.794 "security": 0, 00:41:45.794 "format": 0, 00:41:45.794 "firmware": 0, 00:41:45.794 "ns_manage": 0 00:41:45.794 }, 00:41:45.794 "multi_ctrlr": true, 00:41:45.794 "ana_reporting": false 00:41:45.794 }, 00:41:45.794 "vs": { 00:41:45.794 "nvme_version": "1.3" 00:41:45.794 }, 00:41:45.794 "ns_data": { 00:41:45.794 "id": 1, 00:41:45.795 "can_share": true 00:41:45.795 } 00:41:45.795 } 00:41:45.795 ], 00:41:45.795 
"mp_policy": "active_passive" 00:41:45.795 } 00:41:45.795 } 00:41:45.795 ] 00:41:45.795 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2032408 00:41:45.795 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:45.795 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:45.795 Running I/O for 10 seconds... 00:41:47.178 Latency(us) 00:41:47.178 [2024-10-13T12:37:50.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:47.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:47.178 Nvme0n1 : 1.00 17014.00 66.46 0.00 0.00 0.00 0.00 0.00 00:41:47.178 [2024-10-13T12:37:50.885Z] =================================================================================================================== 00:41:47.178 [2024-10-13T12:37:50.885Z] Total : 17014.00 66.46 0.00 0.00 0.00 0.00 0.00 00:41:47.178 00:41:47.748 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:48.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:48.009 Nvme0n1 : 2.00 17435.50 68.11 0.00 0.00 0.00 0.00 0.00 00:41:48.009 [2024-10-13T12:37:51.716Z] =================================================================================================================== 00:41:48.009 [2024-10-13T12:37:51.716Z] Total : 17435.50 68.11 0.00 0.00 0.00 0.00 0.00 00:41:48.009 00:41:48.009 true 00:41:48.009 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:48.009 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:48.270 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:48.270 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:48.270 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2032408 00:41:48.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:48.841 Nvme0n1 : 3.00 17469.00 68.24 0.00 0.00 0.00 0.00 0.00 00:41:48.841 [2024-10-13T12:37:52.548Z] =================================================================================================================== 00:41:48.841 [2024-10-13T12:37:52.548Z] Total : 17469.00 68.24 0.00 0.00 0.00 0.00 0.00 00:41:48.841 00:41:49.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:49.781 Nvme0n1 : 4.00 17597.25 68.74 0.00 0.00 0.00 0.00 0.00 00:41:49.781 [2024-10-13T12:37:53.488Z] =================================================================================================================== 00:41:49.781 [2024-10-13T12:37:53.488Z] Total : 17597.25 68.74 0.00 0.00 0.00 
0.00 0.00 00:41:49.781 00:41:51.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:51.164 Nvme0n1 : 5.00 18941.80 73.99 0.00 0.00 0.00 0.00 0.00 00:41:51.164 [2024-10-13T12:37:54.871Z] =================================================================================================================== 00:41:51.164 [2024-10-13T12:37:54.871Z] Total : 18941.80 73.99 0.00 0.00 0.00 0.00 0.00 00:41:51.164 00:41:52.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:52.105 Nvme0n1 : 6.00 20019.33 78.20 0.00 0.00 0.00 0.00 0.00 00:41:52.105 [2024-10-13T12:37:55.812Z] =================================================================================================================== 00:41:52.105 [2024-10-13T12:37:55.812Z] Total : 20019.33 78.20 0.00 0.00 0.00 0.00 0.00 00:41:52.105 00:41:53.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:53.045 Nvme0n1 : 7.00 20798.29 81.24 0.00 0.00 0.00 0.00 0.00 00:41:53.045 [2024-10-13T12:37:56.752Z] =================================================================================================================== 00:41:53.045 [2024-10-13T12:37:56.752Z] Total : 20798.29 81.24 0.00 0.00 0.00 0.00 0.00 00:41:53.045 00:41:53.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:53.988 Nvme0n1 : 8.00 21374.62 83.49 0.00 0.00 0.00 0.00 0.00 00:41:53.988 [2024-10-13T12:37:57.695Z] =================================================================================================================== 00:41:53.988 [2024-10-13T12:37:57.695Z] Total : 21374.62 83.49 0.00 0.00 0.00 0.00 0.00 00:41:53.988 00:41:54.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:54.930 Nvme0n1 : 9.00 21831.67 85.28 0.00 0.00 0.00 0.00 0.00 00:41:54.930 [2024-10-13T12:37:58.637Z] =================================================================================================================== 00:41:54.930 [2024-10-13T12:37:58.637Z] Total : 21831.67 85.28 0.00 0.00 0.00 0.00 0.00 00:41:54.930 00:41:55.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:55.871 Nvme0n1 : 10.00 22200.40 86.72 0.00 0.00 0.00 0.00 0.00 00:41:55.871 [2024-10-13T12:37:59.578Z] =================================================================================================================== 00:41:55.871 [2024-10-13T12:37:59.578Z] Total : 22200.40 86.72 0.00 0.00 0.00 0.00 0.00 00:41:55.871 00:41:55.871 00:41:55.871 Latency(us) 00:41:55.871 [2024-10-13T12:37:59.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:55.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:55.871 Nvme0n1 : 10.00 22198.55 86.71 0.00 0.00 5762.81 3831.87 31311.89 00:41:55.871 [2024-10-13T12:37:59.578Z] =================================================================================================================== 00:41:55.871 [2024-10-13T12:37:59.578Z] Total : 22198.55 86.71 0.00 0.00 5762.81 3831.87 31311.89 00:41:55.871 { 00:41:55.871 "results": [ 00:41:55.871 { 00:41:55.871 "job": "Nvme0n1", 00:41:55.871 "core_mask": "0x2", 00:41:55.871 "workload": "randwrite", 00:41:55.871 "status": "finished", 00:41:55.871 "queue_depth": 128, 00:41:55.871 "io_size": 4096, 00:41:55.871 "runtime": 10.003673, 00:41:55.871 "iops": 22198.546473880146, 00:41:55.871 "mibps": 86.71307216359432, 00:41:55.871 "io_failed": 0, 00:41:55.871 "io_timeout": 0, 00:41:55.871 "avg_latency_us": 
5762.805408157617, 00:41:55.871 "min_latency_us": 3831.874373538256, 00:41:55.871 "max_latency_us": 31311.887738055462 00:41:55.871 } 00:41:55.871 ], 00:41:55.871 "core_count": 1 00:41:55.871 } 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2032301 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2032301 ']' 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2032301 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2032301 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2032301' 00:41:55.871 killing process with pid 2032301 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2032301 00:41:55.871 Received shutdown signal, test time was about 10.000000 seconds 00:41:55.871 00:41:55.871 Latency(us) 00:41:55.871 [2024-10-13T12:37:59.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:55.871 [2024-10-13T12:37:59.578Z] =================================================================================================================== 00:41:55.871 [2024-10-13T12:37:59.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:55.871 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2032301 00:41:56.132 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:56.133 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:56.393 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:56.393 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:56.654 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:56.654 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == 
\d\i\r\t\y ]] 00:41:56.654 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:56.915 [2024-10-13 14:38:00.364137] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:56.915 request: 00:41:56.915 { 00:41:56.915 "uuid": "002c9f51-f870-400e-9cc8-bf2419341d2d", 00:41:56.915 "method": "bdev_lvol_get_lvstores", 00:41:56.915 "req_id": 1 00:41:56.915 } 00:41:56.915 Got JSON-RPC error response 00:41:56.915 response: 00:41:56.915 { 00:41:56.915 "code": -19, 00:41:56.915 "message": "No such device" 00:41:56.915 } 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ 
-n '' ]] 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:56.915 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:57.176 aio_bdev 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e87c2e9-1a13-450d-933e-13e638ac6d00 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2e87c2e9-1a13-450d-933e-13e638ac6d00 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:57.176 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:57.437 14:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e87c2e9-1a13-450d-933e-13e638ac6d00 -t 2000 00:41:57.437 [ 00:41:57.437 { 00:41:57.437 "name": "2e87c2e9-1a13-450d-933e-13e638ac6d00", 00:41:57.437 "aliases": [ 00:41:57.437 "lvs/lvol" 00:41:57.437 ], 00:41:57.437 "product_name": "Logical Volume", 00:41:57.437 "block_size": 4096, 00:41:57.437 "num_blocks": 38912, 00:41:57.437 "uuid": "2e87c2e9-1a13-450d-933e-13e638ac6d00", 00:41:57.437 "assigned_rate_limits": { 00:41:57.437 "rw_ios_per_sec": 0, 00:41:57.437 "rw_mbytes_per_sec": 0, 00:41:57.437 "r_mbytes_per_sec": 0, 00:41:57.437 "w_mbytes_per_sec": 0 00:41:57.437 }, 00:41:57.437 "claimed": false, 00:41:57.437 "zoned": false, 00:41:57.437 "supported_io_types": { 00:41:57.437 "read": true, 00:41:57.437 "write": true, 00:41:57.437 "unmap": true, 00:41:57.437 "flush": false, 00:41:57.437 "reset": true, 00:41:57.437 "nvme_admin": false, 00:41:57.437 "nvme_io": false, 00:41:57.437 "nvme_io_md": false, 00:41:57.437 "write_zeroes": true, 00:41:57.437 "zcopy": false, 00:41:57.437 "get_zone_info": false, 00:41:57.437 "zone_management": false, 00:41:57.437 "zone_append": false, 00:41:57.437 "compare": false, 00:41:57.437 "compare_and_write": false, 00:41:57.437 "abort": false, 00:41:57.437 "seek_hole": true, 00:41:57.437 "seek_data": true, 00:41:57.437 "copy": false, 00:41:57.437 "nvme_iov_md": false 00:41:57.437 }, 00:41:57.437 "driver_specific": { 00:41:57.437 "lvol": { 00:41:57.437 "lvol_store_uuid": "002c9f51-f870-400e-9cc8-bf2419341d2d", 00:41:57.437 "base_bdev": "aio_bdev", 00:41:57.437 "thin_provision": false, 00:41:57.437 "num_allocated_clusters": 38, 00:41:57.437 "snapshot": false, 00:41:57.437 "clone": false, 00:41:57.437 "esnap_clone": false 00:41:57.437 } 00:41:57.437 } 00:41:57.437 
} 00:41:57.437 ] 00:41:57.437 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:41:57.437 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:57.437 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:57.698 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:57.698 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:57.698 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:57.959 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:57.959 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e87c2e9-1a13-450d-933e-13e638ac6d00 00:41:57.959 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 002c9f51-f870-400e-9cc8-bf2419341d2d 00:41:58.220 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:58.481 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:58.481 00:41:58.481 real 0m15.727s 00:41:58.481 user 0m15.293s 00:41:58.481 sys 0m1.415s 00:41:58.481 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:58.481 14:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:58.481 ************************************ 00:41:58.481 END TEST lvs_grow_clean 00:41:58.481 ************************************ 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:58.481 ************************************ 00:41:58.481 START TEST lvs_grow_dirty 00:41:58.481 ************************************ 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 
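[editor's note] For reference, the teardown that closed out lvs_grow_clean above corresponds to this RPC sequence (a sketch; the UUIDs are the ones from that run, and RPC is the same rpc.py path as in the earlier sketch):

    # Teardown from lvs_grow_clean above (sketch; UUIDs from this run).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_lvol_delete 2e87c2e9-1a13-450d-933e-13e638ac6d00   # remove the lvol first
    $RPC bdev_lvol_delete_lvstore -u 002c9f51-f870-400e-9cc8-bf2419341d2d
    $RPC bdev_aio_delete aio_bdev                                # also closes the lvstore on its base bdev
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev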
00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:58.481 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:58.482 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:58.482 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:58.482 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:58.482 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:58.743 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:58.743 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:59.004 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5da0989-b67b-4214-a561-6ad9b93fae66 00:41:59.004 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:41:59.004 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:59.004 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:59.004 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:59.004 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5da0989-b67b-4214-a561-6ad9b93fae66 lvol 150 00:41:59.265 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=daf41313-eb53-4fde-821f-66f912c7730c 00:41:59.265 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:59.265 14:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:59.525 [2024-10-13 14:38:03.000030] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:59.525 [2024-10-13 14:38:03.000209] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:59.525 true 00:41:59.525 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:59.525 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:41:59.525 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:59.525 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:59.787 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 daf41313-eb53-4fde-821f-66f912c7730c 00:42:00.048 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:00.048 [2024-10-13 14:38:03.660560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:00.048 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2035120 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2035120 /var/tmp/bdevperf.sock 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2035120 ']' 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:00.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:00.309 14:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:00.309 [2024-10-13 14:38:03.894027] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:00.309 [2024-10-13 14:38:03.894087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035120 ] 00:42:00.570 [2024-10-13 14:38:04.024597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:00.570 [2024-10-13 14:38:04.070504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.570 [2024-10-13 14:38:04.087199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.141 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:01.141 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:42:01.141 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:42:01.408 Nvme0n1 00:42:01.408 14:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:42:01.408 [ 00:42:01.408 { 00:42:01.408 "name": "Nvme0n1", 00:42:01.408 "aliases": [ 00:42:01.408 "daf41313-eb53-4fde-821f-66f912c7730c" 00:42:01.408 ], 00:42:01.408 "product_name": "NVMe disk", 00:42:01.408 "block_size": 4096, 00:42:01.408 "num_blocks": 38912, 00:42:01.408 "uuid": "daf41313-eb53-4fde-821f-66f912c7730c", 00:42:01.408 "numa_id": 0, 00:42:01.408 "assigned_rate_limits": { 00:42:01.408 "rw_ios_per_sec": 0, 00:42:01.408 "rw_mbytes_per_sec": 0, 00:42:01.408 "r_mbytes_per_sec": 0, 00:42:01.408 "w_mbytes_per_sec": 0 00:42:01.408 }, 00:42:01.408 "claimed": false, 00:42:01.408 "zoned": false, 00:42:01.408 "supported_io_types": { 00:42:01.408 "read": true, 00:42:01.408 "write": true, 00:42:01.408 "unmap": true, 00:42:01.408 "flush": true, 00:42:01.408 "reset": true, 00:42:01.408 "nvme_admin": true, 00:42:01.408 "nvme_io": true, 00:42:01.408 "nvme_io_md": false, 00:42:01.408 "write_zeroes": true, 00:42:01.408 "zcopy": false, 00:42:01.408 "get_zone_info": false, 00:42:01.408 "zone_management": false, 00:42:01.408 "zone_append": false, 
00:42:01.408 "compare": true, 00:42:01.408 "compare_and_write": true, 00:42:01.408 "abort": true, 00:42:01.408 "seek_hole": false, 00:42:01.408 "seek_data": false, 00:42:01.408 "copy": true, 00:42:01.408 "nvme_iov_md": false 00:42:01.408 }, 00:42:01.408 "memory_domains": [ 00:42:01.408 { 00:42:01.408 "dma_device_id": "system", 00:42:01.408 "dma_device_type": 1 00:42:01.408 } 00:42:01.408 ], 00:42:01.408 "driver_specific": { 00:42:01.408 "nvme": [ 00:42:01.408 { 00:42:01.408 "trid": { 00:42:01.408 "trtype": "TCP", 00:42:01.408 "adrfam": "IPv4", 00:42:01.408 "traddr": "10.0.0.2", 00:42:01.408 "trsvcid": "4420", 00:42:01.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:42:01.408 }, 00:42:01.408 "ctrlr_data": { 00:42:01.408 "cntlid": 1, 00:42:01.408 "vendor_id": "0x8086", 00:42:01.408 "model_number": "SPDK bdev Controller", 00:42:01.408 "serial_number": "SPDK0", 00:42:01.408 "firmware_revision": "25.01", 00:42:01.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:01.408 "oacs": { 00:42:01.408 "security": 0, 00:42:01.408 "format": 0, 00:42:01.408 "firmware": 0, 00:42:01.408 "ns_manage": 0 00:42:01.408 }, 00:42:01.408 "multi_ctrlr": true, 00:42:01.408 "ana_reporting": false 00:42:01.408 }, 00:42:01.408 "vs": { 00:42:01.408 "nvme_version": "1.3" 00:42:01.408 }, 00:42:01.408 "ns_data": { 00:42:01.408 "id": 1, 00:42:01.408 "can_share": true 00:42:01.408 } 00:42:01.408 } 00:42:01.408 ], 00:42:01.408 "mp_policy": "active_passive" 00:42:01.408 } 00:42:01.408 } 00:42:01.408 ] 00:42:01.408 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2035391 00:42:01.408 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:42:01.408 14:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:01.668 Running I/O for 10 seconds... 
00:42:02.610 Latency(us) 00:42:02.610 [2024-10-13T12:38:06.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:02.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:02.610 Nvme0n1 : 1.00 17417.00 68.04 0.00 0.00 0.00 0.00 0.00 00:42:02.610 [2024-10-13T12:38:06.317Z] =================================================================================================================== 00:42:02.610 [2024-10-13T12:38:06.317Z] Total : 17417.00 68.04 0.00 0.00 0.00 0.00 0.00 00:42:02.610 00:42:03.550 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:03.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:03.550 Nvme0n1 : 2.00 17668.00 69.02 0.00 0.00 0.00 0.00 0.00 00:42:03.550 [2024-10-13T12:38:07.257Z] =================================================================================================================== 00:42:03.550 [2024-10-13T12:38:07.257Z] Total : 17668.00 69.02 0.00 0.00 0.00 0.00 0.00 00:42:03.550 00:42:03.811 true 00:42:03.811 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:03.811 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:42:03.811 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:42:03.811 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:42:03.811 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2035391 00:42:04.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:04.766 Nvme0n1 : 3.00 17752.33 69.35 0.00 0.00 0.00 0.00 0.00 00:42:04.766 [2024-10-13T12:38:08.473Z] =================================================================================================================== 00:42:04.766 [2024-10-13T12:38:08.473Z] Total : 17752.33 69.35 0.00 0.00 0.00 0.00 0.00 00:42:04.766 00:42:05.709 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:05.709 Nvme0n1 : 4.00 17794.25 69.51 0.00 0.00 0.00 0.00 0.00 00:42:05.709 [2024-10-13T12:38:09.416Z] =================================================================================================================== 00:42:05.709 [2024-10-13T12:38:09.416Z] Total : 17794.25 69.51 0.00 0.00 0.00 0.00 0.00 00:42:05.709 00:42:06.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:06.652 Nvme0n1 : 5.00 18382.60 71.81 0.00 0.00 0.00 0.00 0.00 00:42:06.652 [2024-10-13T12:38:10.359Z] =================================================================================================================== 00:42:06.652 [2024-10-13T12:38:10.359Z] Total : 18382.60 71.81 0.00 0.00 0.00 0.00 0.00 00:42:06.652 00:42:07.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:07.595 Nvme0n1 : 6.00 19552.67 76.38 0.00 0.00 0.00 0.00 0.00 00:42:07.595 [2024-10-13T12:38:11.302Z] 
=================================================================================================================== 00:42:07.595 [2024-10-13T12:38:11.302Z] Total : 19552.67 76.38 0.00 0.00 0.00 0.00 0.00 00:42:07.595 00:42:08.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:08.617 Nvme0n1 : 7.00 20397.43 79.68 0.00 0.00 0.00 0.00 0.00 00:42:08.617 [2024-10-13T12:38:12.324Z] =================================================================================================================== 00:42:08.617 [2024-10-13T12:38:12.324Z] Total : 20397.43 79.68 0.00 0.00 0.00 0.00 0.00 00:42:08.617 00:42:09.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:09.559 Nvme0n1 : 8.00 21031.75 82.16 0.00 0.00 0.00 0.00 0.00 00:42:09.559 [2024-10-13T12:38:13.266Z] =================================================================================================================== 00:42:09.559 [2024-10-13T12:38:13.266Z] Total : 21031.75 82.16 0.00 0.00 0.00 0.00 0.00 00:42:09.559 00:42:10.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:10.499 Nvme0n1 : 9.00 21525.11 84.08 0.00 0.00 0.00 0.00 0.00 00:42:10.499 [2024-10-13T12:38:14.206Z] =================================================================================================================== 00:42:10.499 [2024-10-13T12:38:14.206Z] Total : 21525.11 84.08 0.00 0.00 0.00 0.00 0.00 00:42:10.499 00:42:11.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:11.882 Nvme0n1 : 10.00 21919.80 85.62 0.00 0.00 0.00 0.00 0.00 00:42:11.882 [2024-10-13T12:38:15.589Z] =================================================================================================================== 00:42:11.882 [2024-10-13T12:38:15.589Z] Total : 21919.80 85.62 0.00 0.00 0.00 0.00 0.00 00:42:11.882 00:42:11.882 00:42:11.882 Latency(us) 00:42:11.882 [2024-10-13T12:38:15.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:11.882 Nvme0n1 : 10.00 21919.37 85.62 0.00 0.00 5836.33 3079.18 28684.32 00:42:11.882 [2024-10-13T12:38:15.589Z] =================================================================================================================== 00:42:11.882 [2024-10-13T12:38:15.589Z] Total : 21919.37 85.62 0.00 0.00 5836.33 3079.18 28684.32 00:42:11.882 { 00:42:11.882 "results": [ 00:42:11.882 { 00:42:11.882 "job": "Nvme0n1", 00:42:11.882 "core_mask": "0x2", 00:42:11.882 "workload": "randwrite", 00:42:11.882 "status": "finished", 00:42:11.882 "queue_depth": 128, 00:42:11.882 "io_size": 4096, 00:42:11.882 "runtime": 10.003163, 00:42:11.882 "iops": 21919.366904248185, 00:42:11.882 "mibps": 85.62252696971947, 00:42:11.882 "io_failed": 0, 00:42:11.882 "io_timeout": 0, 00:42:11.882 "avg_latency_us": 5836.333127147465, 00:42:11.882 "min_latency_us": 3079.184764450384, 00:42:11.882 "max_latency_us": 28684.316739057802 00:42:11.882 } 00:42:11.882 ], 00:42:11.882 "core_count": 1 00:42:11.882 } 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2035120 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2035120 ']' 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2035120 
00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2035120 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2035120' 00:42:11.882 killing process with pid 2035120 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2035120 00:42:11.882 Received shutdown signal, test time was about 10.000000 seconds 00:42:11.882 00:42:11.882 Latency(us) 00:42:11.882 [2024-10-13T12:38:15.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.882 [2024-10-13T12:38:15.589Z] =================================================================================================================== 00:42:11.882 [2024-10-13T12:38:15.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2035120 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:11.882 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:12.142 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:12.142 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2031595 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2031595 00:42:12.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2031595 Killed "${NVMF_APP[@]}" "$@" 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:42:12.403 14:38:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:12.403 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2037414 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2037414 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2037414 ']' 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:12.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:12.404 14:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:12.404 [2024-10-13 14:38:15.981210] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:12.404 [2024-10-13 14:38:15.982210] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:12.404 [2024-10-13 14:38:15.982253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:12.664 [2024-10-13 14:38:16.121614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:12.664 [2024-10-13 14:38:16.169403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.664 [2024-10-13 14:38:16.184580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:12.664 [2024-10-13 14:38:16.184606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:12.664 [2024-10-13 14:38:16.184612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:12.664 [2024-10-13 14:38:16.184616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
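[editor's note] The dirty variant restarts the nvmf target in interrupt mode inside the test network namespace. The launch traced above corresponds to the sketch below (the cvl_0_0_ns_spdk netns name is specific to this rig; per the log, -e 0xFFFF sets the tracepoint group mask and -m 0x1 pins the single reactor to core 0):

    # Relaunch of the target in interrupt mode, as traced above (sketch; netns from this rig).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    # On startup the target logs "Set SPDK running in interrupt mode." and the
    # app_thread / nvmf_tgt_poll_group threads switch to intr mode, matching the notices above.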
00:42:12.664 [2024-10-13 14:38:16.184621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:12.664 [2024-10-13 14:38:16.185048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.664 [2024-10-13 14:38:16.229282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:12.664 [2024-10-13 14:38:16.229477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:13.233 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:13.233 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:42:13.233 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:13.233 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:13.233 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:13.233 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.234 14:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:13.493 [2024-10-13 14:38:16.975031] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:42:13.493 [2024-10-13 14:38:16.975267] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:42:13.493 [2024-10-13 14:38:16.975353] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev daf41313-eb53-4fde-821f-66f912c7730c 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=daf41313-eb53-4fde-821f-66f912c7730c 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:13.493 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b daf41313-eb53-4fde-821f-66f912c7730c -t 2000 00:42:13.753 [ 00:42:13.753 { 00:42:13.753 "name": "daf41313-eb53-4fde-821f-66f912c7730c", 00:42:13.753 "aliases": [ 00:42:13.753 "lvs/lvol" 00:42:13.753 ], 00:42:13.753 "product_name": "Logical Volume", 00:42:13.753 "block_size": 4096, 00:42:13.753 "num_blocks": 38912, 00:42:13.753 "uuid": "daf41313-eb53-4fde-821f-66f912c7730c", 00:42:13.753 "assigned_rate_limits": { 00:42:13.753 "rw_ios_per_sec": 0, 00:42:13.753 "rw_mbytes_per_sec": 0, 00:42:13.753 "r_mbytes_per_sec": 0, 00:42:13.753 "w_mbytes_per_sec": 0 00:42:13.753 }, 00:42:13.753 "claimed": false, 00:42:13.753 "zoned": false, 00:42:13.753 "supported_io_types": { 00:42:13.753 "read": true, 00:42:13.753 "write": true, 00:42:13.753 "unmap": true, 00:42:13.753 "flush": false, 00:42:13.753 "reset": true, 00:42:13.753 "nvme_admin": false, 00:42:13.753 "nvme_io": false, 00:42:13.753 "nvme_io_md": false, 00:42:13.753 "write_zeroes": true, 00:42:13.753 "zcopy": false, 00:42:13.753 "get_zone_info": false, 00:42:13.753 "zone_management": false, 00:42:13.753 "zone_append": false, 00:42:13.753 "compare": false, 00:42:13.753 "compare_and_write": false, 00:42:13.753 "abort": false, 00:42:13.753 "seek_hole": true, 00:42:13.753 "seek_data": true, 00:42:13.753 "copy": false, 00:42:13.753 "nvme_iov_md": false 00:42:13.753 }, 00:42:13.753 "driver_specific": { 00:42:13.753 "lvol": { 00:42:13.753 "lvol_store_uuid": "d5da0989-b67b-4214-a561-6ad9b93fae66", 00:42:13.753 "base_bdev": "aio_bdev", 00:42:13.753 "thin_provision": false, 00:42:13.753 "num_allocated_clusters": 38, 00:42:13.753 "snapshot": false, 00:42:13.753 "clone": false, 00:42:13.753 "esnap_clone": false 00:42:13.753 } 00:42:13.753 } 00:42:13.753 } 00:42:13.753 ] 00:42:13.753 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:42:13.753 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:13.753 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:42:14.014 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:42:14.014 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:14.014 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:42:14.014 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:42:14.014 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:14.274 [2024-10-13 14:38:17.853503] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:42:14.274 14:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:14.537 request: 00:42:14.537 { 00:42:14.537 "uuid": "d5da0989-b67b-4214-a561-6ad9b93fae66", 00:42:14.537 "method": "bdev_lvol_get_lvstores", 00:42:14.537 "req_id": 1 00:42:14.537 } 00:42:14.537 Got JSON-RPC error response 00:42:14.537 response: 00:42:14.537 { 00:42:14.537 "code": -19, 00:42:14.537 "message": "No such device" 00:42:14.537 } 00:42:14.537 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:42:14.537 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:14.537 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:14.537 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:14.537 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:42:14.537 
aio_bdev 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev daf41313-eb53-4fde-821f-66f912c7730c 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=daf41313-eb53-4fde-821f-66f912c7730c 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:42:14.798 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b daf41313-eb53-4fde-821f-66f912c7730c -t 2000 00:42:15.058 [ 00:42:15.058 { 00:42:15.058 "name": "daf41313-eb53-4fde-821f-66f912c7730c", 00:42:15.058 "aliases": [ 00:42:15.058 "lvs/lvol" 00:42:15.058 ], 00:42:15.058 "product_name": "Logical Volume", 00:42:15.058 "block_size": 4096, 00:42:15.058 "num_blocks": 38912, 00:42:15.058 "uuid": "daf41313-eb53-4fde-821f-66f912c7730c", 00:42:15.058 "assigned_rate_limits": { 00:42:15.058 "rw_ios_per_sec": 0, 00:42:15.058 "rw_mbytes_per_sec": 0, 00:42:15.058 "r_mbytes_per_sec": 0, 00:42:15.058 "w_mbytes_per_sec": 0 00:42:15.058 }, 00:42:15.058 "claimed": false, 00:42:15.058 "zoned": false, 00:42:15.058 "supported_io_types": { 00:42:15.058 "read": true, 00:42:15.058 "write": true, 00:42:15.058 "unmap": true, 00:42:15.058 "flush": false, 00:42:15.058 "reset": true, 00:42:15.058 "nvme_admin": false, 00:42:15.058 "nvme_io": false, 00:42:15.058 "nvme_io_md": false, 00:42:15.058 "write_zeroes": true, 00:42:15.058 "zcopy": false, 00:42:15.058 "get_zone_info": false, 00:42:15.058 "zone_management": false, 00:42:15.058 "zone_append": false, 00:42:15.058 "compare": false, 00:42:15.058 "compare_and_write": false, 00:42:15.058 "abort": false, 00:42:15.058 "seek_hole": true, 00:42:15.058 "seek_data": true, 00:42:15.058 "copy": false, 00:42:15.058 "nvme_iov_md": false 00:42:15.058 }, 00:42:15.058 "driver_specific": { 00:42:15.058 "lvol": { 00:42:15.058 "lvol_store_uuid": "d5da0989-b67b-4214-a561-6ad9b93fae66", 00:42:15.058 "base_bdev": "aio_bdev", 00:42:15.058 "thin_provision": false, 00:42:15.058 "num_allocated_clusters": 38, 00:42:15.058 "snapshot": false, 00:42:15.058 "clone": false, 00:42:15.058 "esnap_clone": false 00:42:15.058 } 00:42:15.058 } 00:42:15.058 } 00:42:15.058 ] 00:42:15.058 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:42:15.058 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:15.058 14:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:42:15.319 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:42:15.319 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:15.319 14:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:42:15.580 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:42:15.580 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete daf41313-eb53-4fde-821f-66f912c7730c 00:42:15.580 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5da0989-b67b-4214-a561-6ad9b93fae66 00:42:15.840 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:42:16.101 00:42:16.101 real 0m17.508s 00:42:16.101 user 0m35.280s 00:42:16.101 sys 0m2.971s 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:42:16.101 ************************************ 00:42:16.101 END TEST lvs_grow_dirty 00:42:16.101 ************************************ 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:42:16.101 nvmf_trace.0 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:16.101 rmmod nvme_tcp 00:42:16.101 rmmod nvme_fabrics 00:42:16.101 rmmod nvme_keyring 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2037414 ']' 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2037414 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2037414 ']' 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2037414 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:16.101 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2037414 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2037414' 00:42:16.362 killing process with pid 2037414 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2037414 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2037414 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
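The teardown entering here is nvmftestfini: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the target process (pid 2037414) is killed, and the iptr step traced next saves the live iptables rule set, filters out every rule the harness tagged, and restores the remainder. A minimal sketch of that save/filter/restore idiom (the SPDK_NVMF tag is the comment attached when each rule was installed, visible later in this log at common.sh@788):

  # Drop only rules this harness added; every such rule carries
  #   -m comment --comment 'SPDK_NVMF:<original args>'
  iptr() {
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }
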
00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:16.362 14:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:18.908 00:42:18.908 real 0m44.535s 00:42:18.908 user 0m53.519s 00:42:18.908 sys 0m10.446s 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:42:18.908 ************************************ 00:42:18.908 END TEST nvmf_lvs_grow 00:42:18.908 ************************************ 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:18.908 ************************************ 00:42:18.908 START TEST nvmf_bdev_io_wait 00:42:18.908 ************************************ 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:42:18.908 * Looking for test storage... 
00:42:18.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:42:18.908 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:18.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.909 --rc genhtml_branch_coverage=1 00:42:18.909 --rc genhtml_function_coverage=1 00:42:18.909 --rc genhtml_legend=1 00:42:18.909 --rc geninfo_all_blocks=1 00:42:18.909 --rc geninfo_unexecuted_blocks=1 00:42:18.909 00:42:18.909 ' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:18.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.909 --rc genhtml_branch_coverage=1 00:42:18.909 --rc genhtml_function_coverage=1 00:42:18.909 --rc genhtml_legend=1 00:42:18.909 --rc geninfo_all_blocks=1 00:42:18.909 --rc geninfo_unexecuted_blocks=1 00:42:18.909 00:42:18.909 ' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:18.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.909 --rc genhtml_branch_coverage=1 00:42:18.909 --rc genhtml_function_coverage=1 00:42:18.909 --rc genhtml_legend=1 00:42:18.909 --rc geninfo_all_blocks=1 00:42:18.909 --rc geninfo_unexecuted_blocks=1 00:42:18.909 00:42:18.909 ' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:18.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:18.909 --rc genhtml_branch_coverage=1 00:42:18.909 --rc genhtml_function_coverage=1 00:42:18.909 --rc genhtml_legend=1 00:42:18.909 --rc geninfo_all_blocks=1 00:42:18.909 --rc 
geninfo_unexecuted_blocks=1 00:42:18.909 00:42:18.909 ' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:42:18.909 14:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
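The stretch of trace that follows builds the NIC allow-lists: e810 and x722 arrays for Intel parts and an mlx array for Mellanox, each filled from pci_bus_cache, an associative array (assumed here; it is populated outside this excerpt) mapping "vendor:device" IDs to PCI addresses. A sketch of the shape of that lookup:

  intel=0x8086 mellanox=0x15b3
  declare -A pci_bus_cache   # assumed filled elsewhere, e.g.
                             # pci_bus_cache["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"
  e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810 variants
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # (the parts matched below)
  x722+=(${pci_bus_cache["$intel:0x37d2"]})    # Intel X722
  mlx+=(${pci_bus_cache["$mellanox:0x101d"]})  # Mellanox ConnectX parts
  pci_devs=("${e810[@]}")                      # TCP on e810: restrict to that family
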
00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
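Both 0x159b ports match (driver ice), and the loop traced below resolves each PCI function to its kernel interface by globbing sysfs; that is all the pci_net_devs manipulation at common.sh@409/@425 does. A minimal sketch, assuming each port is bound to a network driver so the net/ subdirectory exists:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      [[ -e ${pci_net_devs[0]} ]] || continue            # skip unbound ports
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
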
00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:27.046 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:27.046 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:27.046 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:27.047 Found net devices under 0000:31:00.0: cvl_0_0 00:42:27.047 
14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:27.047 Found net devices under 0000:31:00.1: cvl_0_1 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:27.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:27.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:42:27.047 00:42:27.047 --- 10.0.0.2 ping statistics --- 00:42:27.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:27.047 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:27.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:27.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:42:27.047 00:42:27.047 --- 10.0.0.1 ping statistics --- 00:42:27.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:27.047 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2042463 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2042463 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2042463 ']' 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:27.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
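nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 2042463), and waitforlisten now polls until the RPC socket answers. The loop body itself is not part of this excerpt; a plausible sketch of the pattern, using rpc.py's spdk_get_version purely as a liveness probe (the probe choice is an assumption, not lifted from this log):

  # Poll until $pid serves JSON-RPC on $rpc_addr, or fail after ~10s.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1        # target died: give up early
          scripts/rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }
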
00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:27.047 14:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 [2024-10-13 14:38:29.781716] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:27.048 [2024-10-13 14:38:29.782687] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:27.048 [2024-10-13 14:38:29.782726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:27.048 [2024-10-13 14:38:29.919757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:27.048 [2024-10-13 14:38:29.967054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:27.048 [2024-10-13 14:38:29.986579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:27.048 [2024-10-13 14:38:29.986608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:27.048 [2024-10-13 14:38:29.986615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:27.048 [2024-10-13 14:38:29.986622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:27.048 [2024-10-13 14:38:29.986628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:27.048 [2024-10-13 14:38:29.988371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.048 [2024-10-13 14:38:29.988520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:27.048 [2024-10-13 14:38:29.988660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.048 [2024-10-13 14:38:29.988661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:27.048 [2024-10-13 14:38:29.988916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 [2024-10-13 14:38:30.667191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:27.048 [2024-10-13 14:38:30.667289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:27.048 [2024-10-13 14:38:30.667647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:27.048 [2024-10-13 14:38:30.667788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
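With the target idling under --wait-for-rpc, configuration has to land in a fixed order: bdev options first, then framework_start_init, then transport and subsystem wiring. The tiny pool requested by bdev_set_options -p 5 -c 1 (bdev_io pool of 5, per-thread cache of 1, assuming rpc.py's usual flag meanings) is the point of this test: it starves bdev_io allocation so I/O must queue and wait, which is what bdev_io_wait exercises. The sequence traced around here, collapsed into one sketch:

  rpc_py=scripts/rpc.py                             # against the target in the netns
  $rpc_py bdev_set_options -p 5 -c 1                # must precede framework init
  $rpc_py framework_start_init
  $rpc_py nvmf_create_transport -t tcp -o -u 8192   # -u 8192: IO unit size
  $rpc_py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
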
00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 [2024-10-13 14:38:30.677146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 Malloc0 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:27.048 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:27.048 [2024-10-13 14:38:30.749755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2042565 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2042567 00:42:27.310 14:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:27.310 { 00:42:27.310 "params": { 00:42:27.310 "name": "Nvme$subsystem", 00:42:27.310 "trtype": "$TEST_TRANSPORT", 00:42:27.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.310 "adrfam": "ipv4", 00:42:27.310 "trsvcid": "$NVMF_PORT", 00:42:27.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.310 "hdgst": ${hdgst:-false}, 00:42:27.310 "ddgst": ${ddgst:-false} 00:42:27.310 }, 00:42:27.310 "method": "bdev_nvme_attach_controller" 00:42:27.310 } 00:42:27.310 EOF 00:42:27.310 )") 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2042569 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:27.310 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:27.310 { 00:42:27.310 "params": { 00:42:27.310 "name": "Nvme$subsystem", 00:42:27.310 "trtype": "$TEST_TRANSPORT", 00:42:27.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.310 "adrfam": "ipv4", 00:42:27.310 "trsvcid": "$NVMF_PORT", 00:42:27.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.310 "hdgst": ${hdgst:-false}, 00:42:27.310 "ddgst": ${ddgst:-false} 00:42:27.310 }, 00:42:27.310 "method": "bdev_nvme_attach_controller" 00:42:27.310 } 00:42:27.310 EOF 00:42:27.310 )") 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2042572 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:27.311 { 00:42:27.311 "params": { 00:42:27.311 "name": "Nvme$subsystem", 00:42:27.311 "trtype": "$TEST_TRANSPORT", 00:42:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.311 "adrfam": "ipv4", 00:42:27.311 "trsvcid": "$NVMF_PORT", 00:42:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.311 "hdgst": ${hdgst:-false}, 00:42:27.311 "ddgst": ${ddgst:-false} 00:42:27.311 }, 00:42:27.311 "method": "bdev_nvme_attach_controller" 00:42:27.311 } 00:42:27.311 EOF 00:42:27.311 )") 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:27.311 { 00:42:27.311 "params": { 00:42:27.311 "name": "Nvme$subsystem", 00:42:27.311 "trtype": "$TEST_TRANSPORT", 00:42:27.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.311 "adrfam": "ipv4", 00:42:27.311 "trsvcid": "$NVMF_PORT", 00:42:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.311 "hdgst": ${hdgst:-false}, 00:42:27.311 "ddgst": ${ddgst:-false} 00:42:27.311 }, 00:42:27.311 "method": "bdev_nvme_attach_controller" 00:42:27.311 } 00:42:27.311 EOF 00:42:27.311 )") 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2042565 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:27.311 "params": { 00:42:27.311 "name": "Nvme1", 00:42:27.311 "trtype": "tcp", 00:42:27.311 "traddr": "10.0.0.2", 00:42:27.311 "adrfam": "ipv4", 00:42:27.311 "trsvcid": "4420", 00:42:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.311 "hdgst": false, 00:42:27.311 "ddgst": false 00:42:27.311 }, 00:42:27.311 "method": "bdev_nvme_attach_controller" 00:42:27.311 }' 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:27.311 "params": { 00:42:27.311 "name": "Nvme1", 00:42:27.311 "trtype": "tcp", 00:42:27.311 "traddr": "10.0.0.2", 00:42:27.311 "adrfam": "ipv4", 00:42:27.311 "trsvcid": "4420", 00:42:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.311 "hdgst": false, 00:42:27.311 "ddgst": false 00:42:27.311 }, 00:42:27.311 "method": "bdev_nvme_attach_controller" 00:42:27.311 }' 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:27.311 "params": { 00:42:27.311 "name": "Nvme1", 00:42:27.311 "trtype": "tcp", 00:42:27.311 "traddr": "10.0.0.2", 00:42:27.311 "adrfam": "ipv4", 00:42:27.311 "trsvcid": "4420", 00:42:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.311 "hdgst": false, 00:42:27.311 "ddgst": false 00:42:27.311 }, 00:42:27.311 "method": "bdev_nvme_attach_controller" 00:42:27.311 }' 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:42:27.311 14:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:27.311 "params": { 00:42:27.311 "name": "Nvme1", 00:42:27.311 "trtype": "tcp", 00:42:27.311 "traddr": "10.0.0.2", 00:42:27.311 "adrfam": "ipv4", 00:42:27.311 "trsvcid": "4420", 00:42:27.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.311 "hdgst": false, 00:42:27.311 "ddgst": false 00:42:27.311 }, 00:42:27.311 "method": "bdev_nvme_attach_controller" 00:42:27.311 }' 00:42:27.311 [2024-10-13 14:38:30.809795] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:27.311 [2024-10-13 14:38:30.809894] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:42:27.311 [2024-10-13 14:38:30.810594] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:42:27.311 [2024-10-13 14:38:30.810659] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:42:27.311 [2024-10-13 14:38:30.811038] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:27.311 [2024-10-13 14:38:30.811114] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:42:27.311 [2024-10-13 14:38:30.814307] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:27.311 [2024-10-13 14:38:30.814382] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:42:27.573 [2024-10-13 14:38:31.083426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:27.573 [2024-10-13 14:38:31.133124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.573 [2024-10-13 14:38:31.164386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:27.573 [2024-10-13 14:38:31.172670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:27.573 [2024-10-13 14:38:31.216992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.573 [2024-10-13 14:38:31.233768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:27.573 [2024-10-13 14:38:31.236405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:27.834 [2024-10-13 14:38:31.283574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.834 [2024-10-13 14:38:31.289005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:27.834 [2024-10-13 14:38:31.299799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:27.834 Running I/O for 1 seconds... 00:42:27.834 [2024-10-13 14:38:31.338295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.834 [2024-10-13 14:38:31.354530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:42:27.834 Running I/O for 1 seconds... 00:42:28.095 Running I/O for 1 seconds... 00:42:28.095 Running I/O for 1 seconds... 
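Each of the four bdevperf jobs above receives its bdev configuration over an anonymous pipe rather than a file: gen_nvmf_target_json assembles the bdev_nvme_attach_controller JSON echoed in the trace, and --json /dev/fd/63 is the process substitution that delivers it. A condensed sketch of that launch pattern, with paths and arguments copied from the trace and only the write and read jobs shown ($SPDK_ROOT is a placeholder):

  bdevperf="$SPDK_ROOT/build/examples/bdevperf"
  "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!
  # The flush (0x40) and unmap (0x80) jobs are launched the same way, then the
  # script waits on each PID in turn (the "wait 2042565" ... records in the log).
  wait "$WRITE_PID" "$READ_PID"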
00:42:28.665 182712.00 IOPS, 713.72 MiB/s
00:42:28.666 Latency(us)
00:42:28.666 [2024-10-13T12:38:32.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:28.666 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:42:28.666 Nvme1n1 : 1.00 182344.73 712.28 0.00 0.00 698.07 301.08 2011.73
00:42:28.666 [2024-10-13T12:38:32.373Z] ===================================================================================================================
00:42:28.666 [2024-10-13T12:38:32.373Z] Total : 182344.73 712.28 0.00 0.00 698.07 301.08 2011.73
00:42:28.926 8932.00 IOPS, 34.89 MiB/s
00:42:28.926 Latency(us)
00:42:28.926 [2024-10-13T12:38:32.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:28.926 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:42:28.926 Nvme1n1 : 1.02 8908.64 34.80 0.00 0.00 14256.57 2230.70 24414.51
00:42:28.926 [2024-10-13T12:38:32.633Z] ===================================================================================================================
00:42:28.926 [2024-10-13T12:38:32.633Z] Total : 8908.64 34.80 0.00 0.00 14256.57 2230.70 24414.51
00:42:28.926 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2042567
00:42:28.926 12185.00 IOPS, 47.60 MiB/s
00:42:28.926 Latency(us)
00:42:28.926 [2024-10-13T12:38:32.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:28.926 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:42:28.926 Nvme1n1 : 1.01 12228.95 47.77 0.00 0.00 10429.07 5364.62 16969.73
00:42:28.926 [2024-10-13T12:38:32.633Z] ===================================================================================================================
00:42:28.926 [2024-10-13T12:38:32.633Z] Total : 12228.95 47.77 0.00 0.00 10429.07 5364.62 16969.73
00:42:29.187 9225.00 IOPS, 36.04 MiB/s
00:42:29.187 Latency(us)
00:42:29.187 [2024-10-13T12:38:32.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:29.187 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:42:29.187 Nvme1n1 : 1.01 9340.61 36.49 0.00 0.00 13670.06 3435.00 35472.21
00:42:29.187 [2024-10-13T12:38:32.894Z] ===================================================================================================================
00:42:29.187 [2024-10-13T12:38:32.894Z] Total : 9340.61 36.49 0.00 0.00 13670.06 3435.00 35472.21
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2042569
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2042572
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:29.187 rmmod nvme_tcp 00:42:29.187 rmmod nvme_fabrics 00:42:29.187 rmmod nvme_keyring 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2042463 ']' 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2042463 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2042463 ']' 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2042463 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:29.187 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2042463 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2042463' 00:42:29.448 killing process with pid 2042463 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2042463 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2042463 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 
00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:42:29.448 14:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:29.448 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:29.448 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:29.448 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:29.448 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:29.448 14:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:31.994 00:42:31.994 real 0m12.993s 00:42:31.994 user 0m15.883s 00:42:31.994 sys 0m7.601s 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:42:31.994 ************************************ 00:42:31.994 END TEST nvmf_bdev_io_wait 00:42:31.994 ************************************ 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:31.994 ************************************ 00:42:31.994 START TEST nvmf_queue_depth 00:42:31.994 ************************************ 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:42:31.994 * Looking for test storage... 
00:42:31.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:31.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.994 --rc genhtml_branch_coverage=1 00:42:31.994 --rc genhtml_function_coverage=1 00:42:31.994 --rc genhtml_legend=1 00:42:31.994 --rc geninfo_all_blocks=1 00:42:31.994 --rc geninfo_unexecuted_blocks=1 00:42:31.994 00:42:31.994 ' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:31.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.994 --rc genhtml_branch_coverage=1 00:42:31.994 --rc genhtml_function_coverage=1 00:42:31.994 --rc genhtml_legend=1 00:42:31.994 --rc geninfo_all_blocks=1 00:42:31.994 --rc geninfo_unexecuted_blocks=1 00:42:31.994 00:42:31.994 ' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:31.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.994 --rc genhtml_branch_coverage=1 00:42:31.994 --rc genhtml_function_coverage=1 00:42:31.994 --rc genhtml_legend=1 00:42:31.994 --rc geninfo_all_blocks=1 00:42:31.994 --rc geninfo_unexecuted_blocks=1 00:42:31.994 00:42:31.994 ' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:31.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.994 --rc genhtml_branch_coverage=1 00:42:31.994 --rc genhtml_function_coverage=1 00:42:31.994 --rc genhtml_legend=1 00:42:31.994 --rc geninfo_all_blocks=1 00:42:31.994 --rc 
geninfo_unexecuted_blocks=1 00:42:31.994 00:42:31.994 ' 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:31.994 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:42:31.995 14:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:40.134 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:40.135 14:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:40.135 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:40.135 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:42:40.135 Found net devices under 0000:31:00.0: cvl_0_0 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:40.135 Found net devices under 0000:31:00.1: cvl_0_1 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:40.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:40.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:42:40.135 00:42:40.135 --- 10.0.0.2 ping statistics --- 00:42:40.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.135 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:40.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:40.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:42:40.135 00:42:40.135 --- 10.0.0.1 ping statistics --- 00:42:40.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.135 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2047315 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2047315 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2047315 ']' 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.135 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:40.136 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:40.136 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:40.136 14:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.136 [2024-10-13 14:38:42.943819] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:40.136 [2024-10-13 14:38:42.944996] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:40.136 [2024-10-13 14:38:42.945045] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:40.136 [2024-10-13 14:38:43.090784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:40.136 [2024-10-13 14:38:43.141775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.136 [2024-10-13 14:38:43.167666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:40.136 [2024-10-13 14:38:43.167708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:40.136 [2024-10-13 14:38:43.167717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:40.136 [2024-10-13 14:38:43.167724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:40.136 [2024-10-13 14:38:43.167730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:40.136 [2024-10-13 14:38:43.168486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:40.136 [2024-10-13 14:38:43.231296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:40.136 [2024-10-13 14:38:43.231578] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
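The target that just came up is reachable only through the namespace split plumbed in the records above: the target-side port (cvl_0_0) is moved into cvl_0_0_ns_spdk while the initiator port (cvl_0_1) stays in the root namespace, so the NVMe/TCP traffic crosses a real link on one host. A condensed recap of that plumbing, copied from the trace ($SPDK_ROOT stands in for the workspace path):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The SPDK_NVMF comment tag is what lets teardown strip exactly this rule,
  # via: iptables-save | grep -v SPDK_NVMF | iptables-restore (seen earlier in this log).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2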
00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.136 [2024-10-13 14:38:43.821381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:40.136 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.398 Malloc0 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.398 [2024-10-13 14:38:43.905558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2047375 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2047375 /var/tmp/bdevperf.sock 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2047375 ']' 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:40.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:40.398 14:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:40.398 [2024-10-13 14:38:43.961763] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:42:40.398 [2024-10-13 14:38:43.961827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047375 ] 00:42:40.398 [2024-10-13 14:38:44.096418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
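Collapsed out of the rpc_cmd traces above, the whole target-side provisioning for this test is five RPCs; a sketch with the socket path spelled out:

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started with -z (wait for RPC) against its own socket, /var/tmp/bdevperf.sock, so the initiator side can be configured the same way.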
00:42:40.659 [2024-10-13 14:38:44.144462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.659 [2024-10-13 14:38:44.172611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:41.230 NVMe0n1 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:41.230 14:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:42:41.230 Running I/O for 10 seconds... 00:42:43.575 8785.00 IOPS, 34.32 MiB/s [2024-10-13T12:38:48.222Z] 9018.50 IOPS, 35.23 MiB/s [2024-10-13T12:38:49.162Z] 9624.00 IOPS, 37.59 MiB/s [2024-10-13T12:38:50.103Z] 10510.75 IOPS, 41.06 MiB/s [2024-10-13T12:38:51.045Z] 11215.40 IOPS, 43.81 MiB/s [2024-10-13T12:38:51.988Z] 11626.00 IOPS, 45.41 MiB/s [2024-10-13T12:38:52.929Z] 11965.71 IOPS, 46.74 MiB/s [2024-10-13T12:38:54.314Z] 12181.38 IOPS, 47.58 MiB/s [2024-10-13T12:38:55.255Z] 12385.67 IOPS, 48.38 MiB/s [2024-10-13T12:38:55.255Z] 12513.10 IOPS, 48.88 MiB/s 00:42:51.548 Latency(us) 00:42:51.548 [2024-10-13T12:38:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.548 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:42:51.548 Verification LBA range: start 0x0 length 0x4000 00:42:51.548 NVMe0n1 : 10.05 12554.88 49.04 0.00 0.00 81296.56 15984.39 65251.35 00:42:51.548 [2024-10-13T12:38:55.255Z] =================================================================================================================== 00:42:51.548 [2024-10-13T12:38:55.255Z] Total : 12554.88 49.04 0.00 0.00 81296.56 15984.39 65251.35 00:42:51.548 { 00:42:51.548 "results": [ 00:42:51.548 { 00:42:51.548 "job": "NVMe0n1", 00:42:51.548 "core_mask": "0x1", 00:42:51.548 "workload": "verify", 00:42:51.548 "status": "finished", 00:42:51.548 "verify_range": { 00:42:51.548 "start": 0, 00:42:51.548 "length": 16384 00:42:51.548 }, 00:42:51.548 "queue_depth": 1024, 00:42:51.548 "io_size": 4096, 00:42:51.548 "runtime": 10.04645, 00:42:51.548 "iops": 12554.882570460213, 00:42:51.548 "mibps": 49.04251004086021, 00:42:51.548 "io_failed": 0, 00:42:51.548 "io_timeout": 0, 00:42:51.548 "avg_latency_us": 81296.55983243346, 00:42:51.548 "min_latency_us": 15984.390243902439, 00:42:51.548 "max_latency_us": 65251.346475108585 00:42:51.548 } 00:42:51.548 ], 00:42:51.548 "core_count": 1 00:42:51.548 } 00:42:51.548 14:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2047375 00:42:51.548 14:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 
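The initiator half mirrors this: the trace attaches the remote subsystem through bdevperf's RPC socket, producing the NVMe0n1 bdev, and bdevperf.py then triggers the 10 s verify run whose JSON summary appears above. A sketch, with a hypothetical results.json standing in for a capture of that JSON block (jq assumed available):

  rpc_bp="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc_bp bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # With the JSON block saved to results.json, pull the headline numbers:
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json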
-- # '[' -z 2047375 ']' 00:42:51.548 14:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2047375 00:42:51.548 14:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:42:51.548 14:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.548 14:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2047375 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2047375' 00:42:51.548 killing process with pid 2047375 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2047375 00:42:51.548 Received shutdown signal, test time was about 10.000000 seconds 00:42:51.548 00:42:51.548 Latency(us) 00:42:51.548 [2024-10-13T12:38:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.548 [2024-10-13T12:38:55.255Z] =================================================================================================================== 00:42:51.548 [2024-10-13T12:38:55.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2047375 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:51.548 rmmod nvme_tcp 00:42:51.548 rmmod nvme_fabrics 00:42:51.548 rmmod nvme_keyring 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2047315 ']' 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2047315 00:42:51.548 14:38:55 
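The killprocess calls above follow a deliberate pattern: confirm the pid still maps to an SPDK reactor (reactor_0 and reactor_1 here) rather than a bare sudo before signalling it, then reap it. A condensed sketch of that guard (the real helper in autotest_common.sh also walks sudo's children and handles FreeBSD):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
      [ "$name" = sudo ] && return 1                # never signal bare sudo
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" 2>/dev/null        # wait works: pid is our child
  }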
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2047315 ']' 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2047315 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.548 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2047315 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2047315' 00:42:51.809 killing process with pid 2047315 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2047315 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2047315 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:51.809 14:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:54.368 00:42:54.368 real 0m22.302s 00:42:54.368 user 0m24.328s 00:42:54.368 sys 0m7.277s 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:42:54.368 ************************************ 00:42:54.368 END TEST 
nvmf_queue_depth 00:42:54.368 ************************************ 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:54.368 ************************************ 00:42:54.368 START TEST nvmf_target_multipath 00:42:54.368 ************************************ 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:42:54.368 * Looking for test storage... 00:42:54.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:54.368 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:54.369 14:38:57 
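The cmp_versions trace that starts above and continues below decides whether the installed lcov predates 2.x, so the matching --rc option spelling can be exported. Its core is a field-by-field numeric compare; a condensed sketch (the real scripts/common.sh also filters non-numeric fields):

  lt() {  # true when dotted version $1 < $2; missing fields count as 0
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo 'lcov < 2: keep the legacy lcov_branch_coverage options'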
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.369 --rc genhtml_branch_coverage=1 00:42:54.369 --rc genhtml_function_coverage=1 00:42:54.369 --rc genhtml_legend=1 00:42:54.369 --rc geninfo_all_blocks=1 00:42:54.369 --rc geninfo_unexecuted_blocks=1 00:42:54.369 00:42:54.369 ' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.369 --rc genhtml_branch_coverage=1 00:42:54.369 --rc genhtml_function_coverage=1 00:42:54.369 --rc genhtml_legend=1 00:42:54.369 --rc geninfo_all_blocks=1 00:42:54.369 --rc geninfo_unexecuted_blocks=1 00:42:54.369 00:42:54.369 ' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.369 --rc genhtml_branch_coverage=1 00:42:54.369 --rc genhtml_function_coverage=1 00:42:54.369 --rc genhtml_legend=1 00:42:54.369 --rc geninfo_all_blocks=1 00:42:54.369 --rc geninfo_unexecuted_blocks=1 00:42:54.369 00:42:54.369 ' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:54.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:54.369 --rc genhtml_branch_coverage=1 00:42:54.369 --rc genhtml_function_coverage=1 00:42:54.369 --rc genhtml_legend=1 00:42:54.369 --rc geninfo_all_blocks=1 00:42:54.369 --rc geninfo_unexecuted_blocks=1 00:42:54.369 00:42:54.369 ' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:54.369 14:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:42:54.369 14:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
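build_nvmf_app_args above shows the target command line being assembled incrementally: the shared-memory id and trace mask always go in, and --interrupt-mode is appended because this suite was started with that flag. Later (as already seen in the queue_depth run) the namespace wrapper is prepended. A sketch of that assembly:

  NVMF_APP=(./build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + tracepoint mask
  NVMF_APP+=(--interrupt-mode)                   # only in interrupt-mode suites
  # Once the test namespace exists, run the app inside it:
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0x2 &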
nvmf/common.sh@315 -- # pci_devs=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:02.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:02.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:02.505 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:02.506 14:39:04 
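The scan above matches both E810 ports against the whitelisted Intel device id 0x159b; the lines that follow walk sysfs to find the kernel netdev behind each matched function. A condensed sketch of that pci-to-netdev mapping, limited to the single device id seen in this run:

  # lspci -Dn prints domain-qualified addresses; -d filters vendor:device.
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done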
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:02.506 Found net devices under 0000:31:00.0: cvl_0_0 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:02.506 Found net devices under 0000:31:00.1: cvl_0_1 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:02.506 14:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:02.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:02.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:43:02.506 00:43:02.506 --- 10.0.0.2 ping statistics --- 00:43:02.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:02.506 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:02.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
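Collected in execution order, the plumbing traced above moves the target port into its own namespace, splits 10.0.0.0/24 between the two sides, and opens port 4420 with an SPDK_NVMF-tagged rule so the fini path can strip it again; the pings above and below then prove reachability in both directions:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'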
00:43:02.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:43:02.506 00:43:02.506 --- 10.0.0.1 ping statistics --- 00:43:02.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:02.506 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:43:02.506 only one NIC for nvmf test 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:02.506 rmmod nvme_tcp 00:43:02.506 rmmod nvme_fabrics 00:43:02.506 rmmod nvme_keyring 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:02.506 14:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:43:03.891 14:39:07 
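The iptr helper invoked above (and expanded just below) undoes that firewall change by replaying the saved ruleset minus anything tagged SPDK_NVMF; together with namespace removal the whole network fini reduces to a sketch like this (ip netns del is an assumption about what _remove_spdk_ns ultimately runs):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything untagged
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true       # assumed _remove_spdk_ns body
  ip -4 addr flush cvl_0_1                               # clear the initiator address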
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:03.891 00:43:03.891 real 0m9.885s 00:43:03.891 user 0m2.091s 00:43:03.891 sys 0m5.714s 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:03.891 ************************************ 00:43:03.891 END TEST nvmf_target_multipath 00:43:03.891 ************************************ 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:03.891 ************************************ 00:43:03.891 START TEST nvmf_zcopy 00:43:03.891 ************************************ 00:43:03.891 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:43:04.154 * Looking for test storage... 
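Each suite here runs under the same run_test wrapper: it prints the START banner, times the body (hence the real/user/sys block above), and closes with the END banner that downstream log parsing keys on. A minimal sketch of that shape (banner widths and layout trimmed relative to the real helper):

  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      echo "************ END TEST $name ************"
  }
  run_test nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode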
00:43:04.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:04.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.154 --rc genhtml_branch_coverage=1 00:43:04.154 --rc genhtml_function_coverage=1 00:43:04.154 --rc genhtml_legend=1 00:43:04.154 --rc geninfo_all_blocks=1 00:43:04.154 --rc geninfo_unexecuted_blocks=1 00:43:04.154 00:43:04.154 ' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:04.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.154 --rc genhtml_branch_coverage=1 00:43:04.154 --rc genhtml_function_coverage=1 00:43:04.154 --rc genhtml_legend=1 00:43:04.154 --rc geninfo_all_blocks=1 00:43:04.154 --rc geninfo_unexecuted_blocks=1 00:43:04.154 00:43:04.154 ' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:04.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.154 --rc genhtml_branch_coverage=1 00:43:04.154 --rc genhtml_function_coverage=1 00:43:04.154 --rc genhtml_legend=1 00:43:04.154 --rc geninfo_all_blocks=1 00:43:04.154 --rc geninfo_unexecuted_blocks=1 00:43:04.154 00:43:04.154 ' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:04.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.154 --rc genhtml_branch_coverage=1 00:43:04.154 --rc genhtml_function_coverage=1 00:43:04.154 --rc genhtml_legend=1 00:43:04.154 --rc geninfo_all_blocks=1 00:43:04.154 --rc geninfo_unexecuted_blocks=1 00:43:04.154 00:43:04.154 ' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.154 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:04.155 14:39:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:43:04.155 14:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:43:12.459 14:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:12.459 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:12.459 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:12.459 Found net devices under 0000:31:00.0: cvl_0_0 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:12.459 Found net devices under 0000:31:00.1: cvl_0_1 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:12.459 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:12.460 14:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:12.460 14:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:12.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:12.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:43:12.460 00:43:12.460 --- 10.0.0.2 ping statistics --- 00:43:12.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:12.460 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:12.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:12.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:43:12.460 00:43:12.460 --- 10.0.0.1 ping statistics --- 00:43:12.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:12.460 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2058404 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2058404 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2058404 ']' 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:12.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:12.460 14:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 [2024-10-13 14:39:15.228682] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:12.460 [2024-10-13 14:39:15.229767] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:43:12.460 [2024-10-13 14:39:15.229814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:12.460 [2024-10-13 14:39:15.369850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:12.460 [2024-10-13 14:39:15.417970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:12.460 [2024-10-13 14:39:15.434654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:12.460 [2024-10-13 14:39:15.434683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:12.460 [2024-10-13 14:39:15.434691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:12.460 [2024-10-13 14:39:15.434698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:12.460 [2024-10-13 14:39:15.434704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:12.460 [2024-10-13 14:39:15.435276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:12.460 [2024-10-13 14:39:15.483087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:12.460 [2024-10-13 14:39:15.483337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
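Note: at this point nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace in interrupt mode on core 1 (-m 0x2), and the harness blocks in waitforlisten until the RPC socket answers. A rough bash equivalent of that bring-up, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock socket (the polling loop is an illustrative stand-in for waitforlisten, not its actual implementation):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Wait until the RPC server responds, bailing out if the target dies first.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1
      sleep 0.5
  done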
00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 [2024-10-13 14:39:16.079996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 [2024-10-13 14:39:16.108261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:43:12.460 14:39:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 malloc0 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:43:12.460 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:12.461 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:12.461 { 00:43:12.461 "params": { 00:43:12.461 "name": "Nvme$subsystem", 00:43:12.461 "trtype": "$TEST_TRANSPORT", 00:43:12.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.461 "adrfam": "ipv4", 00:43:12.461 "trsvcid": "$NVMF_PORT", 00:43:12.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.461 "hdgst": ${hdgst:-false}, 00:43:12.461 "ddgst": ${ddgst:-false} 00:43:12.461 }, 00:43:12.461 "method": "bdev_nvme_attach_controller" 00:43:12.461 } 00:43:12.461 EOF 00:43:12.461 )") 00:43:12.461 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:43:12.721 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:43:12.721 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:43:12.721 14:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:12.721 "params": { 00:43:12.721 "name": "Nvme1", 00:43:12.721 "trtype": "tcp", 00:43:12.721 "traddr": "10.0.0.2", 00:43:12.721 "adrfam": "ipv4", 00:43:12.721 "trsvcid": "4420", 00:43:12.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:12.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:12.721 "hdgst": false, 00:43:12.721 "ddgst": false 00:43:12.721 }, 00:43:12.721 "method": "bdev_nvme_attach_controller" 00:43:12.721 }' 00:43:12.721 [2024-10-13 14:39:16.208155] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
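Note: the verify job gets its bdev layer configured through /dev/fd/62: gen_nvmf_target_json expands the heredoc template above into the attach entry printed just before bdevperf starts, and bdevperf reads it as a JSON config. The same wiring can be reproduced in bash with process substitution; the sketch below assumes a minimal subsystems/bdev envelope around the entry (the harness's real template may emit additional bdev options not shown in this excerpt):

  # Attach entry copied from the resolved JSON above, wrapped in the envelope
  # bdevperf expects for --json input (the envelope shape is an assumption).
  config='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}]}]}'
  ./build/examples/bdevperf --json <(printf '%s\n' "$config") -t 10 -q 128 -w verify -o 8192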
00:43:12.721 [2024-10-13 14:39:16.208205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058719 ]
00:43:12.721 [2024-10-13 14:39:16.338746] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:43:12.721 [2024-10-13 14:39:16.386789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:12.721 [2024-10-13 14:39:16.405253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:43:12.982 Running I/O for 10 seconds...
00:43:15.308 6376.00 IOPS, 49.81 MiB/s
[2024-10-13T12:39:19.958Z] 6439.00 IOPS, 50.30 MiB/s
[2024-10-13T12:39:20.902Z] 6482.00 IOPS, 50.64 MiB/s
[2024-10-13T12:39:21.845Z] 6455.75 IOPS, 50.44 MiB/s
[2024-10-13T12:39:22.785Z] 6758.20 IOPS, 52.80 MiB/s
[2024-10-13T12:39:23.728Z] 7216.00 IOPS, 56.38 MiB/s
[2024-10-13T12:39:24.668Z] 7542.71 IOPS, 58.93 MiB/s
[2024-10-13T12:39:26.053Z] 7786.50 IOPS, 60.83 MiB/s
[2024-10-13T12:39:26.625Z] 7979.44 IOPS, 62.34 MiB/s
[2024-10-13T12:39:26.886Z] 8134.30 IOPS, 63.55 MiB/s
00:43:23.179                                                                                        Latency(us)
00:43:23.179 [2024-10-13T12:39:26.886Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:23.179 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:43:23.179 Verification LBA range: start 0x0 length 0x1000
00:43:23.179 Nvme1n1            :      10.01    8136.61      63.57       0.00       0.00   15684.16    1450.64   27589.50
00:43:23.179 [2024-10-13T12:39:26.886Z] ===================================================================================================================
00:43:23.179 [2024-10-13T12:39:26.886Z] Total              :            8136.61      63.57       0.00       0.00   15684.16    1450.64   27589.50
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2060652
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:43:23.179 {
00:43:23.179 "params": {
00:43:23.179 "name": "Nvme$subsystem",
00:43:23.179 "trtype": "$TEST_TRANSPORT",
00:43:23.179 "traddr": "$NVMF_FIRST_TARGET_IP",
00:43:23.179 "adrfam": "ipv4",
00:43:23.179 "trsvcid": "$NVMF_PORT",
00:43:23.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:43:23.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:43:23.179 "hdgst": ${hdgst:-false},
00:43:23.179 "ddgst": ${ddgst:-false}
00:43:23.179 },
00:43:23.179 "method": 
"bdev_nvme_attach_controller" 00:43:23.179 } 00:43:23.179 EOF 00:43:23.179 )") 00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:43:23.179 [2024-10-13 14:39:26.735590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.179 [2024-10-13 14:39:26.735617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:43:23.179 14:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:23.179 "params": { 00:43:23.179 "name": "Nvme1", 00:43:23.179 "trtype": "tcp", 00:43:23.179 "traddr": "10.0.0.2", 00:43:23.179 "adrfam": "ipv4", 00:43:23.179 "trsvcid": "4420", 00:43:23.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:23.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:23.179 "hdgst": false, 00:43:23.179 "ddgst": false 00:43:23.179 }, 00:43:23.179 "method": "bdev_nvme_attach_controller" 00:43:23.179 }' 00:43:23.179 [2024-10-13 14:39:26.747558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.179 [2024-10-13 14:39:26.747567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.179 [2024-10-13 14:39:26.759555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.179 [2024-10-13 14:39:26.759564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.179 [2024-10-13 14:39:26.771555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.771564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.776919] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
00:43:23.180 [2024-10-13 14:39:26.776966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060652 ] 00:43:23.180 [2024-10-13 14:39:26.783556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.783565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.795556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.795563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.807556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.807564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.819555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.819563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.831555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.831564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.843555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.843562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.855555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.855566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.867555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.867563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.180 [2024-10-13 14:39:26.879555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.180 [2024-10-13 14:39:26.879564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.891554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.891563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.903555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.903563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.907346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
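Note: the subsystem.c/nvmf_rpc.c error pairs that repeat from here on are expected failures: while bdevperf I/O is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is still attached, so the target rejects every attempt. A bash loop of roughly this shape would produce the same pattern (illustrative only; the actual zcopy.sh driver logic is not shown in this excerpt):

  while kill -0 "$perfpid" 2> /dev/null; do
      # Each call is expected to fail with 'Requested NSID 1 already in use';
      # the test only needs the target to stay healthy under the churn.
      ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done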
00:43:23.441 [2024-10-13 14:39:26.915554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.915561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.927555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.927562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.939555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.939563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.951555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.951562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.955145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.441 [2024-10-13 14:39:26.963557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.441 [2024-10-13 14:39:26.963565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.441 [2024-10-13 14:39:26.971023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:23.441 [2024-10-13 14:39:26.975556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:26.975565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:26.987565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:26.987578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:26.999559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:26.999570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.011556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.011568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.023557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.023567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.035566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.035583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.047558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.047568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.059558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.059571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.071555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.071563] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.083555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.083562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.095555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.095562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.107555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.107565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.119556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.119565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.131555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.131563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.442 [2024-10-13 14:39:27.143555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.442 [2024-10-13 14:39:27.143563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.155556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.155566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.167554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.167562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.179554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.179562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.191554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.191562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.203556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.203565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.215555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.215562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.227554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.227562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.239559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.239566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.251557] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.251569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.263558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.263571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 Running I/O for 5 seconds... 00:43:23.703 [2024-10-13 14:39:27.278305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.278322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.291602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.291621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.304614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.304630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.318625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.318641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.331799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.331815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.343357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.343373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.356450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.356465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.371105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.371121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.384122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.384136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.703 [2024-10-13 14:39:27.398701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.703 [2024-10-13 14:39:27.398717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.411974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.411989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.426714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.426729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.440084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 
[2024-10-13 14:39:27.440099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.454216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.454231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.466847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.466862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.479480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.479495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.492238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.492253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.507262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.507277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.520149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.520164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.534794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.534809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.547785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.547803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.562655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.562670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.575644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.575659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.587036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.587051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.600101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.600116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.614874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.614890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.628212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.628227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.642681] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.642696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:23.964 [2024-10-13 14:39:27.655838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:23.964 [2024-10-13 14:39:27.655852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.670344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.670359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.683303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.683318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.695866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.695881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.710686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.710701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.723511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.723526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.735602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.735616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.748251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.748265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.762988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.763004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.225 [2024-10-13 14:39:27.775971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.225 [2024-10-13 14:39:27.775986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.791260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.791275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.803822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.803837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.818768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.818783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.831468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.831483] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.843352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.843367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.855799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.855814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.870815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.870830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.883598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.883613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.895687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.895703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.908418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.908433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.226 [2024-10-13 14:39:27.922453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.226 [2024-10-13 14:39:27.922469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:27.935066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:27.935082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:27.947745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:27.947760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:27.963115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:27.963131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:27.976092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:27.976107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:27.991274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:27.991289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.003526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.003541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.016475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.016490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.031014] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.031030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.043811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.043827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.058981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.058997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.072241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.072256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.086325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.086341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.099113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.099128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.111943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.111958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.126403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.126418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.139422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.139437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.151684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.151700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.164178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.164193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.179234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.179251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.488 [2024-10-13 14:39:28.191935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.488 [2024-10-13 14:39:28.191950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.206810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.206825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.219810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.219825] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.231481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.231496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.244321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.244337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.259279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.259294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 18748.00 IOPS, 146.47 MiB/s [2024-10-13T12:39:28.456Z] [2024-10-13 14:39:28.271943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.271958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.286686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.286703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.299640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.299656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.311402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.311417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.324508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.324524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.338951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.338967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.351998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.352013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.366982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.366998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.380053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.380073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.395055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.395075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.408021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.408036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 
14:39:28.422576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.422592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.435943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.435958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:24.749 [2024-10-13 14:39:28.450812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:24.749 [2024-10-13 14:39:28.450827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.463699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.463715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.475382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.475397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.488266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.488281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.502840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.502856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.515965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.515980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.530860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.530876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.543734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.543749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.555459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.555479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.568263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.568278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.583013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.583028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.596016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.596031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.611042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.611057] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.624123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.624139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.638832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.638847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.651596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.651611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.663244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.663258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.676128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.676143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.690754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.690771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.703792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.703807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.011 [2024-10-13 14:39:28.715979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.011 [2024-10-13 14:39:28.715994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.731102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.731117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.743998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.744013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.758879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.758894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.771786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.771801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.784326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.784341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.799166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.799189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.811753] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.811772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.826992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.827008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.840164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.840178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.854601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.854616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.867990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.868005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.883131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.883146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.895717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.895732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.910552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.910567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.923858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.923872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.938923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.938938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.951841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.951856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.272 [2024-10-13 14:39:28.966749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.272 [2024-10-13 14:39:28.966764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:28.979530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:28.979545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:28.991274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:28.991288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.004419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.004434] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.018670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.018685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.031478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.031493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.044203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.044218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.058223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.058238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.071124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.071143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.084194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.532 [2024-10-13 14:39:29.084208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.532 [2024-10-13 14:39:29.098747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.098763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.111878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.111892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.127184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.127199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.140000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.140014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.155011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.155026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.168080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.168094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.182921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.182937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.195868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.195882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.210133] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.210148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.223162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.223177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.533 [2024-10-13 14:39:29.236287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.533 [2024-10-13 14:39:29.236302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.250519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.250535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.263539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.263554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 18772.00 IOPS, 146.66 MiB/s [2024-10-13T12:39:29.501Z] [2024-10-13 14:39:29.276029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.276043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.291090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.291104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.304104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.304119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.318843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.318858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.331490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.331506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.343952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.343966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.359149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.359164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.372017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.372032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.386967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.386984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.399847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
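What repeats above is a single pair of records: spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 as already in use, and nvmf_rpc_ns_paused then failing the add-namespace RPC. This looks like the test deliberately re-adding an in-use namespace while a 5-second bdevperf-style I/O run proceeds; the interleaved throughput samples stay flat (18748.00 → 18772.00 IOPS), and the numbers are self-consistent: 146.66 MiB/s ÷ 18772 IOPS ≈ 8192 B, i.e. roughly 8 KiB per I/O, so the failing control-plane calls do not appear to disturb the data path. As a minimal sketch of the collision itself — the NQN and bdev names below are assumptions for illustration, not values taken from this run — SPDK's scripts/rpc.py can reproduce it:

    # Sketch only: subsystem NQN and malloc bdev names are hypothetical.
    NQN=nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 "$NQN" Malloc0   # claims NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 "$NQN" Malloc1   # expected to fail:
    # "Requested NSID 1 already in use", mirroring the subsystem.c:2128 errors above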
00:43:25.794 [2024-10-13 14:39:29.399862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.415133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.415148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.427815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.427829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.442610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.442625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.456009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.456023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.470618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.470633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.483457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.483472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:25.794 [2024-10-13 14:39:29.496242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:25.794 [2024-10-13 14:39:29.496256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.510801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.510816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.523738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.523753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.538804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.538820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.551613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.551628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.564094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.564109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.578686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.578702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.591581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.591596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.604246] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.604260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.618790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.618805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.631756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.631770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.646719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.646734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.660025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.660039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.674838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.674853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.687734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.687748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.702729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.702744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.715696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.715712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.726955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.726971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.740473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.740488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.055 [2024-10-13 14:39:29.754615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.055 [2024-10-13 14:39:29.754631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.767279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.767294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.780021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.780037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.794338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.794354] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.807156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.807171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.819809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.819824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.835263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.835278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.848098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.848113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.862994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.863009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.876039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.876054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.316 [2024-10-13 14:39:29.890485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.316 [2024-10-13 14:39:29.890500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.903870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.903885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.918669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.918684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.931443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.931458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.943243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.943258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.956271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.956285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.971363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.971378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.984395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.984410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:29.998957] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:29.998973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.317 [2024-10-13 14:39:30.011644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.317 [2024-10-13 14:39:30.011662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.023396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.023412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.036018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.036034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.050740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.050756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.063805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.063819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.078890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.078906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.091861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.091882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.106827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.106843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.120257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.120272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.134985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.135001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.147906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.147920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.162868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.162884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.175920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.175935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.190749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.190764] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.204126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.204142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.218401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.218416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.231790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.231807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.243703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.243719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.258897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.258913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 18785.67 IOPS, 146.76 MiB/s [2024-10-13T12:39:30.285Z] [2024-10-13 14:39:30.271363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.271378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.578 [2024-10-13 14:39:30.283304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.578 [2024-10-13 14:39:30.283319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.296181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.296196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.310600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.310615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.323375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.323391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.336233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.336249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.350932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.350952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.363873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.363888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 14:39:30.378193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:43:26.839 [2024-10-13 14:39:30.378210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:26.839 [2024-10-13 
14:39:30.391095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:26.839 [2024-10-13 14:39:30.391111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for every add attempt, roughly every 13 ms, from 14:39:30.403562 through 14:39:31.267041 ...]
00:43:27.623 18814.25 IOPS, 146.99 MiB/s [2024-10-13T12:39:31.330Z]
[... the error pair keeps repeating from 14:39:31.279438 through 14:39:32.260049 ...]
00:43:28.671 18814.80 IOPS, 146.99 MiB/s [2024-10-13T12:39:32.378Z]
00:43:28.671 [2024-10-13 14:39:32.272418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:43:28.671 [2024-10-13 14:39:32.272433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:43:28.671 Latency(us)
00:43:28.671 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:43:28.671 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:43:28.671 Nvme1n1                     :       5.01   18817.58     147.01       0.00     0.00    6796.29    2559.14   11550.36
00:43:28.671 ===================================================================================================================
00:43:28.671 Total                       :                18817.58     147.01       0.00     0.00    6796.29    2559.14   11550.36
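The error flood above is the zcopy test behaving as designed, not a failure: namespace 1 stays attached to nqn.2016-06.io.spdk:cnode1 while the bdevperf job (Nvme1n1, randrw, queue depth 128) runs, and the script keeps pausing the subsystem and retrying nvmf_subsystem_add_ns for the same NSID, so every retry is rejected with the pair of messages logged above. A minimal sketch of the same collision, assuming a running SPDK target and the stock scripts/rpc.py from the SPDK tree:

  # create a 64 MiB malloc bdev (512-byte blocks) and attach it as NSID 1
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # a second add of the same NSID fails exactly like the pairs logged above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1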
[... the same two-line error pair repeats from 14:39:32.283560 through 14:39:32.367566 ...]
00:43:28.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2060652) - No such process
00:43:28.671 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2060652
00:43:28.671 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:43:28.671 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:28.671 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:28.932 delay0
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
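bdev_delay_create above layers an artificial-latency bdev, delay0, on top of malloc0; its four numeric arguments are microseconds (average read, p99 read, average write, p99 write), so each I/O to delay0 is held for about a second, which gives the abort example that follows long-lived commands to cancel. The same two steps as a stand-alone sketch with SPDK's stock rpc.py (values copied from the trace):

  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # all four latencies in usec
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1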
00:43:28.932 14:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:43:28.932 [2024-10-13 14:39:32.625648] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:43:37.073 Initializing NVMe Controllers
00:43:37.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:43:37.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:43:37.073 Initialization complete. Launching workers.
00:43:37.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 12321
00:43:37.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12548, failed to submit 65
00:43:37.073 success 12399, unsuccessful 149, failed 0
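The abort counters above are internally consistent, on the natural reading that every I/O either completes or is failed by an abort, and every abort attempt is either submitted or fails to submit (that reading is our interpretation, not tool output): 292 completed + 12321 failed = 12613, which equals 12548 aborts submitted + 65 that could not be submitted; of the submitted aborts, 12399 succeeded + 149 were unsuccessful = 12548. A one-line shell check of the arithmetic:

  [ $((292 + 12321)) -eq $((12548 + 65)) ] && [ $((12399 + 149)) -eq 12548 ] && echo counters consistent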
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:43:37.073 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:43:37.073 rmmod nvme_tcp
00:43:37.074 rmmod nvme_fabrics
00:43:37.074 rmmod nvme_keyring
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2058404 ']'
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2058404
[... common/autotest_common.sh@950-@960: killprocess confirms pid 2058404 is alive and that its process_name is reactor_1 rather than sudo ...]
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2058404'
00:43:37.074 killing process with pid 2058404
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2058404
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2058404
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:43:37.074 14:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:43:38.018 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:43:38.018
00:43:38.018 real	0m34.177s
00:43:38.018 user	0m42.887s
00:43:38.018 sys	0m12.745s
00:43:38.018 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:43:38.018 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:43:38.018 ************************************
00:43:38.018 END TEST nvmf_zcopy
00:43:38.018 ************************************
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:43:38.279 ************************************
00:43:38.279 START TEST nvmf_nmic
00:43:38.279 ************************************
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
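nmic.sh starts the way every test in this run does: it locates its test storage, then probes the installed lcov and compares versions with the lt/cmp_versions helpers from scripts/common.sh (the trace below concludes that 1.15 < 2 and picks the pre-2.0 coverage flags). A stand-alone bash sketch of that comparison, mirroring the lt helper seen in the trace (split on '.', '-' and ':', then compare fields numerically):

  lt() {   # return 0 when version $1 is older than version $2
      local IFS=.-: i v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo '1.15 < 2'   # same verdict as the lt 1.15 2 call in the trace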
00:43:38.279 * Looking for test storage...
00:43:38.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:43:38.279 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
[... scripts/common.sh@373-@368 (cmp_versions 1.15 '<' 2): splits both versions on '.', '-' and ':', compares the fields numerically (1 < 2 on the first field) and returns 0, i.e. lcov 1.15 is older than 2 ...]
00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
[... the same option block is assigned three more times, to LCOV_OPTS and to LCOV='lcov ...' (@1704-@1705) ...]
00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 --
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:38.280 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(the same three entries repeated six more times):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3-@6 prepend /opt/go, /opt/protoc and /opt/golangci once more each, then export PATH and echo the result; every sourcing of export.sh stacks another copy of the same toolchain directories ahead of the stock tail ...]
00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:43:38.542 14:39:41
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:38.542 14:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:38.542 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:38.542 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:38.542 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:43:38.542 14:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:46.690 14:39:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
[... nvmf/common.sh@320-@356 builds its tables of supported NIC PCI IDs - e810 (0x1592, 0x159b), x722 (0x37d2) and the Mellanox mlx list - and, as this is an e810 TCP run, keeps only the e810 devices (pci_devs=("${e810[@]}")) ...]
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:43:46.690 Found 0000:31:00.0 (0x8086 - 0x159b)
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:43:46.690 Found 0000:31:00.1 (0x8086 - 0x159b)
[... both ports are bound to the ice driver, pass the unknown/unbound and RDMA checks, and their net devices are discovered under each PCI device's net/ directory ...]
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:43:46.690 Found net devices under 0000:31:00.0: cvl_0_0
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:43:46.690 Found net devices under 0000:31:00.1: cvl_0_1
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
[... the remaining topology variables (NVMF_INITIATOR_IP, TCP_INTERFACE_LIST, NVMF_TARGET_NS_CMD, the empty second-IP slots) are set the same way ...]
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:46.690 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:46.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:46.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:43:46.691 00:43:46.691 --- 10.0.0.2 ping statistics --- 00:43:46.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:46.691 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:46.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:46.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:43:46.691 00:43:46.691 --- 10.0.0.1 ping statistics --- 00:43:46.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:46.691 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2067121 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 2067121 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2067121 ']' 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:46.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:46.691 14:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.691 [2024-10-13 14:39:49.602552] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:46.691 [2024-10-13 14:39:49.603732] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:43:46.691 [2024-10-13 14:39:49.603783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:46.691 [2024-10-13 14:39:49.749696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:46.691 [2024-10-13 14:39:49.798190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:46.691 [2024-10-13 14:39:49.827637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:46.691 [2024-10-13 14:39:49.827684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:46.691 [2024-10-13 14:39:49.827692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:46.691 [2024-10-13 14:39:49.827699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:46.691 [2024-10-13 14:39:49.827706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:46.691 [2024-10-13 14:39:49.829549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:46.691 [2024-10-13 14:39:49.829680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:46.691 [2024-10-13 14:39:49.829841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:46.691 [2024-10-13 14:39:49.829841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:46.691 [2024-10-13 14:39:49.890494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:46.691 [2024-10-13 14:39:49.891105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
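By this point the harness has moved one NIC port into a private namespace, addressed both ends, opened TCP port 4420, and launched nvmf_tgt inside the namespace in interrupt mode, which is what produces the reactor and intr-mode thread notices around these records. A condensed sketch of that sequence, using the interface and namespace names the trace shows; the binary path is abbreviated and the socket-wait loop is a simplified stand-in for waitforlisten:
# Move the target-side port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# Admit NVMe/TCP traffic arriving on the initiator-side port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Start the target in the namespace in interrupt mode; wait for the RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done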
00:43:46.691 [2024-10-13 14:39:49.892075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:46.691 [2024-10-13 14:39:49.892224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:46.691 [2024-10-13 14:39:49.892390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.953 [2024-10-13 14:39:50.486698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.953 Malloc0 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.953 
14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.953 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.953 [2024-10-13 14:39:50.575099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:43:46.954 test case1: single bdev can't be used in multiple subsystems 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.954 [2024-10-13 14:39:50.602304] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:43:46.954 [2024-10-13 14:39:50.602345] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:43:46.954 [2024-10-13 14:39:50.602354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:43:46.954 request: 00:43:46.954 { 00:43:46.954 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:43:46.954 "namespace": { 00:43:46.954 "bdev_name": "Malloc0", 00:43:46.954 "no_auto_visible": false 00:43:46.954 }, 00:43:46.954 "method": "nvmf_subsystem_add_ns", 00:43:46.954 "req_id": 1 00:43:46.954 } 00:43:46.954 Got JSON-RPC error response 00:43:46.954 response: 00:43:46.954 { 00:43:46.954 "code": -32602, 00:43:46.954 "message": "Invalid parameters" 00:43:46.954 } 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 
0 ]] 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:43:46.954 Adding namespace failed - expected result. 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:43:46.954 test case2: host connect to nvmf target in multiple paths 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:46.954 [2024-10-13 14:39:50.614482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:46.954 14:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:47.558 14:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:43:47.831 14:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:43:47.831 14:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:43:47.831 14:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:47.831 14:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:43:47.831 14:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:43:50.380 14:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:50.380 [global] 00:43:50.380 thread=1 00:43:50.380 invalidate=1 00:43:50.380 rw=write 00:43:50.380 time_based=1 00:43:50.380 runtime=1 00:43:50.380 ioengine=libaio 00:43:50.380 direct=1 00:43:50.380 bs=4096 00:43:50.380 iodepth=1 00:43:50.380 norandommap=0 00:43:50.380 numjobs=1 00:43:50.380 00:43:50.380 verify_dump=1 00:43:50.380 verify_backlog=512 00:43:50.380 verify_state_save=0 00:43:50.380 do_verify=1 00:43:50.380 verify=crc32c-intel 00:43:50.380 [job0] 00:43:50.380 filename=/dev/nvme0n1 00:43:50.380 Could not set queue depth (nvme0n1) 00:43:50.380 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:50.380 fio-3.35 00:43:50.380 Starting 1 thread 00:43:51.323 00:43:51.323 job0: (groupid=0, jobs=1): err= 0: pid=2068033: Sun Oct 13 14:39:55 2024 00:43:51.323 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:51.323 slat (nsec): min=6999, max=56438, avg=27086.99, stdev=3017.39 00:43:51.323 clat (usec): min=458, max=1240, avg=989.84, stdev=97.52 00:43:51.323 lat (usec): min=465, max=1267, avg=1016.92, stdev=97.80 00:43:51.323 clat percentiles (usec): 00:43:51.323 | 1.00th=[ 553], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 947], 00:43:51.323 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:43:51.323 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:43:51.323 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:43:51.323 | 99.99th=[ 1237] 00:43:51.323 write: IOPS=818, BW=3273KiB/s (3351kB/s)(3276KiB/1001msec); 0 zone resets 00:43:51.323 slat (nsec): min=9173, max=69175, avg=29884.96, stdev=10392.58 00:43:51.323 clat (usec): min=213, max=804, avg=543.52, stdev=93.53 00:43:51.323 lat (usec): min=225, max=842, avg=573.41, stdev=98.39 00:43:51.323 clat percentiles (usec): 00:43:51.323 | 1.00th=[ 330], 5.00th=[ 383], 10.00th=[ 429], 20.00th=[ 453], 00:43:51.323 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 562], 00:43:51.323 | 70.00th=[ 603], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 701], 00:43:51.323 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 807], 99.95th=[ 807], 00:43:51.323 | 99.99th=[ 807] 00:43:51.323 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:43:51.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:51.323 lat (usec) : 250=0.08%, 500=17.73%, 750=44.33%, 1000=18.93% 00:43:51.323 lat (msec) : 2=18.93% 00:43:51.323 cpu : usr=3.10%, sys=4.80%, ctx=1332, majf=0, minf=1 00:43:51.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:51.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:51.323 issued rwts: total=512,819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:51.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:51.323 00:43:51.323 Run status group 0 (all jobs): 00:43:51.323 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:43:51.323 WRITE: bw=3273KiB/s (3351kB/s), 3273KiB/s-3273KiB/s (3351kB/s-3351kB/s), io=3276KiB (3355kB), run=1001-1001msec 00:43:51.323 00:43:51.323 Disk stats (read/write): 00:43:51.323 nvme0n1: ios=562/642, merge=0/0, ticks=549/266, in_queue=815, util=92.79% 00:43:51.323 14:39:55 
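The write numbers in the fio summary above are internally consistent: with bs=4096 and iodepth=1, bandwidth is simply completed writes times block size over runtime. A quick check with the values copied from the job output (819 writes issued, 1001 msec runtime):
# 819 x 4 KiB = 3276 KiB over 1.001 s, i.e. ~3273 KiB/s as fio reports.
writes=819 bs_kib=4 msec=1001
echo "$(( writes * bs_kib * 1000 / msec )) KiB/s"   # integer math prints 3272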
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:51.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:51.584 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:51.584 rmmod nvme_tcp 00:43:51.584 rmmod nvme_fabrics 00:43:51.584 rmmod nvme_keyring 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2067121 ']' 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2067121 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2067121 ']' 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2067121 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2067121 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:51.845 14:39:55 
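The disconnect and teardown records around this point reduce to a short sequence: drop both initiator paths, unload the host-side NVMe modules (the source of the rmmod lines above), kill the target by pid, and strip only the SPDK-tagged firewall rule. A condensed sketch with the names from the trace; $nvmfpid is 2067121 here:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # drops the 4420 and 4421 paths
modprobe -v -r nvme-tcp                               # pulls out fabrics/keyring too
kill "$nvmfpid"                                       # target runs as reactor_0
iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove only the SPDK rule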
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2067121' 00:43:51.845 killing process with pid 2067121 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2067121 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2067121 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:51.845 14:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:54.393 00:43:54.393 real 0m15.802s 00:43:54.393 user 0m35.740s 00:43:54.393 sys 0m7.322s 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:43:54.393 ************************************ 00:43:54.393 END TEST nvmf_nmic 00:43:54.393 ************************************ 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:54.393 ************************************ 00:43:54.393 START TEST nvmf_fio_target 00:43:54.393 ************************************ 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp --interrupt-mode 00:43:54.393 * Looking for test storage... 00:43:54.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:54.393 --rc genhtml_branch_coverage=1 00:43:54.393 --rc genhtml_function_coverage=1 00:43:54.393 --rc genhtml_legend=1 00:43:54.393 --rc geninfo_all_blocks=1 00:43:54.393 --rc geninfo_unexecuted_blocks=1 00:43:54.393 00:43:54.393 ' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:54.393 --rc genhtml_branch_coverage=1 00:43:54.393 --rc genhtml_function_coverage=1 00:43:54.393 --rc genhtml_legend=1 00:43:54.393 --rc geninfo_all_blocks=1 00:43:54.393 --rc geninfo_unexecuted_blocks=1 00:43:54.393 00:43:54.393 ' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:54.393 --rc genhtml_branch_coverage=1 00:43:54.393 --rc genhtml_function_coverage=1 00:43:54.393 --rc genhtml_legend=1 00:43:54.393 --rc geninfo_all_blocks=1 00:43:54.393 --rc geninfo_unexecuted_blocks=1 00:43:54.393 00:43:54.393 ' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:54.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:54.393 --rc genhtml_branch_coverage=1 00:43:54.393 --rc genhtml_function_coverage=1 00:43:54.393 --rc genhtml_legend=1 00:43:54.393 --rc geninfo_all_blocks=1 00:43:54.393 --rc geninfo_unexecuted_blocks=1 00:43:54.393 
00:43:54.393 ' 00:43:54.393 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:43:54.394 14:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:02.531 14:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:02.531 14:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:02.531 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:02.531 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:02.531 Found net 
devices under 0000:31:00.0: cvl_0_0 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:02.531 Found net devices under 0000:31:00.1: cvl_0_1 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:02.531 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:02.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:02.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:44:02.532 00:44:02.532 --- 10.0.0.2 ping statistics --- 00:44:02.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.532 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:02.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:02.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:44:02.532 00:44:02.532 --- 10.0.0.1 ping statistics --- 00:44:02.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.532 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2072577 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2072577 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2072577 ']' 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:02.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:02.532 14:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.532 [2024-10-13 14:40:05.578659] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:02.532 [2024-10-13 14:40:05.579784] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:44:02.532 [2024-10-13 14:40:05.579835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:02.532 [2024-10-13 14:40:05.721920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:02.532 [2024-10-13 14:40:05.769388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:02.532 [2024-10-13 14:40:05.797692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:02.532 [2024-10-13 14:40:05.797735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:02.532 [2024-10-13 14:40:05.797744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:02.532 [2024-10-13 14:40:05.797751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:02.532 [2024-10-13 14:40:05.797757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:02.532 [2024-10-13 14:40:05.799994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:02.532 [2024-10-13 14:40:05.800152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:02.532 [2024-10-13 14:40:05.800196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:02.532 [2024-10-13 14:40:05.800196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:02.532 [2024-10-13 14:40:05.862311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:02.532 [2024-10-13 14:40:05.863636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:02.532 [2024-10-13 14:40:05.863827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:02.532 [2024-10-13 14:40:05.864465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:02.532 [2024-10-13 14:40:05.864535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
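Condensed, the namespace plumbing and target launch traced above reduce to the shell sequence below — a minimal sketch for reproducing the topology by hand, not the test framework itself. The cvl_0_0/cvl_0_1 interface names, the 10.0.0.1/10.0.0.2 addresses, and the nvmf_tgt flags are copied from the trace; the workspace path is specific to this CI node, and the iptables comment tag added by the ipts helper is elided.

    # Clear any stale addresses on both ports (both still in the root namespace here).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Move the target-side port into its own namespace; the initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: 10.0.0.1 on the initiator side, 10.0.0.2 on the target side inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring both ports up, plus loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic on port 4420 through the host firewall
    # (the ipts helper above additionally tags the rule with an SPDK_NVMF comment).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch nvmf_tgt inside the namespace: shm id 0, tracepoint mask 0xFFFF,
    # interrupt mode, core mask 0xF (4 cores). Path is this CI workspace's checkout.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF

The reactor.c and thread.c notices above show the effect of --interrupt-mode: all four reactors and each nvmf_tgt poll-group thread come up in interrupt mode rather than busy-polling, which is the behavior this interrupt_mode test group is exercising.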
00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:02.792 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:03.052 [2024-10-13 14:40:06.577204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:03.052 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:03.313 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:44:03.313 14:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:03.573 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:44:03.573 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:03.573 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:44:03.573 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:03.834 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:44:03.834 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:44:04.094 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:04.356 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:44:04.356 14:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:04.356 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:44:04.356 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:44:04.620 14:40:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:44:04.620 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:44:04.880 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:05.141 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:05.141 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:05.141 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:44:05.141 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:44:05.401 14:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:05.661 [2024-10-13 14:40:09.141107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:05.661 14:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:44:05.661 14:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:44:05.922 14:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:06.494 14:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:44:06.494 14:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:44:06.494 14:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:44:06.494 14:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:44:06.494 14:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:44:06.494 14:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:44:08.406 14:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:44:08.406 [global] 00:44:08.406 thread=1 00:44:08.406 invalidate=1 00:44:08.406 rw=write 00:44:08.406 time_based=1 00:44:08.406 runtime=1 00:44:08.406 ioengine=libaio 00:44:08.406 direct=1 00:44:08.406 bs=4096 00:44:08.406 iodepth=1 00:44:08.406 norandommap=0 00:44:08.406 numjobs=1 00:44:08.406 00:44:08.406 verify_dump=1 00:44:08.406 verify_backlog=512 00:44:08.406 verify_state_save=0 00:44:08.406 do_verify=1 00:44:08.406 verify=crc32c-intel 00:44:08.406 [job0] 00:44:08.406 filename=/dev/nvme0n1 00:44:08.406 [job1] 00:44:08.406 filename=/dev/nvme0n2 00:44:08.406 [job2] 00:44:08.406 filename=/dev/nvme0n3 00:44:08.406 [job3] 00:44:08.406 filename=/dev/nvme0n4 00:44:08.675 Could not set queue depth (nvme0n1) 00:44:08.675 Could not set queue depth (nvme0n2) 00:44:08.675 Could not set queue depth (nvme0n3) 00:44:08.675 Could not set queue depth (nvme0n4) 00:44:08.934 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:08.934 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:08.934 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:08.934 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:08.934 fio-3.35 00:44:08.934 Starting 4 threads 00:44:10.333 00:44:10.333 job0: (groupid=0, jobs=1): err= 0: pid=2074006: Sun Oct 13 14:40:13 2024 00:44:10.333 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:44:10.333 slat (nsec): min=25506, max=61509, avg=26675.04, stdev=3010.11 00:44:10.333 clat (usec): min=648, max=1207, avg=986.86, stdev=65.19 00:44:10.333 lat (usec): min=674, max=1233, avg=1013.54, stdev=65.25 00:44:10.333 clat percentiles (usec): 00:44:10.333 | 1.00th=[ 742], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 955], 00:44:10.333 | 30.00th=[ 971], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:44:10.333 | 70.00th=[ 1020], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:44:10.333 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:44:10.333 | 99.99th=[ 1205] 00:44:10.333 write: IOPS=726, BW=2905KiB/s (2975kB/s)(2908KiB/1001msec); 0 zone resets 00:44:10.333 slat (usec): min=9, max=1991, avg=33.48, stdev=73.66 00:44:10.333 clat (usec): min=202, max=1104, avg=615.35, stdev=123.96 00:44:10.333 lat (usec): min=216, max=3095, avg=648.83, stdev=157.81 00:44:10.333 clat percentiles (usec): 00:44:10.333 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 506], 00:44:10.333 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:44:10.333 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 799], 00:44:10.334 | 99.00th=[ 889], 
99.50th=[ 938], 99.90th=[ 1106], 99.95th=[ 1106], 00:44:10.334 | 99.99th=[ 1106] 00:44:10.334 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:44:10.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:10.334 lat (usec) : 250=0.24%, 500=10.82%, 750=41.57%, 1000=29.38% 00:44:10.334 lat (msec) : 2=18.00% 00:44:10.334 cpu : usr=2.40%, sys=3.10%, ctx=1242, majf=0, minf=1 00:44:10.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:10.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 issued rwts: total=512,727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:10.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:10.334 job1: (groupid=0, jobs=1): err= 0: pid=2074013: Sun Oct 13 14:40:13 2024 00:44:10.334 read: IOPS=17, BW=71.6KiB/s (73.4kB/s)(72.0KiB/1005msec) 00:44:10.334 slat (nsec): min=25911, max=26828, avg=26223.11, stdev=184.02 00:44:10.334 clat (usec): min=8357, max=41807, avg=39227.81, stdev=7707.28 00:44:10.334 lat (usec): min=8383, max=41833, avg=39254.04, stdev=7707.36 00:44:10.334 clat percentiles (usec): 00:44:10.334 | 1.00th=[ 8356], 5.00th=[ 8356], 10.00th=[40633], 20.00th=[41157], 00:44:10.334 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:10.334 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:44:10.334 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:44:10.334 | 99.99th=[41681] 00:44:10.334 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:44:10.334 slat (nsec): min=9637, max=69017, avg=28322.48, stdev=10391.04 00:44:10.334 clat (usec): min=254, max=1039, avg=547.50, stdev=150.00 00:44:10.334 lat (usec): min=288, max=1073, avg=575.82, stdev=154.04 00:44:10.334 clat percentiles (usec): 00:44:10.334 | 1.00th=[ 314], 5.00th=[ 343], 10.00th=[ 363], 20.00th=[ 416], 00:44:10.334 | 30.00th=[ 465], 40.00th=[ 486], 50.00th=[ 510], 60.00th=[ 553], 00:44:10.334 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 766], 95.00th=[ 816], 00:44:10.334 | 99.00th=[ 930], 99.50th=[ 988], 99.90th=[ 1037], 99.95th=[ 1037], 00:44:10.334 | 99.99th=[ 1037] 00:44:10.334 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:44:10.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:10.334 lat (usec) : 500=44.91%, 750=39.43%, 1000=11.89% 00:44:10.334 lat (msec) : 2=0.38%, 10=0.19%, 50=3.21% 00:44:10.334 cpu : usr=0.40%, sys=1.69%, ctx=530, majf=0, minf=2 00:44:10.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:10.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:10.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:10.334 job2: (groupid=0, jobs=1): err= 0: pid=2074022: Sun Oct 13 14:40:13 2024 00:44:10.334 read: IOPS=16, BW=67.7KiB/s (69.4kB/s)(68.0KiB/1004msec) 00:44:10.334 slat (nsec): min=25446, max=26051, avg=25737.71, stdev=197.29 00:44:10.334 clat (usec): min=1403, max=42081, avg=39577.67, stdev=9837.56 00:44:10.334 lat (usec): min=1429, max=42107, avg=39603.41, stdev=9837.53 00:44:10.334 clat percentiles (usec): 00:44:10.334 | 1.00th=[ 1401], 5.00th=[ 1401], 10.00th=[41681], 
20.00th=[41681], 00:44:10.334 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:44:10.334 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:10.334 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:10.334 | 99.99th=[42206] 00:44:10.334 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:44:10.334 slat (nsec): min=9382, max=56071, avg=31071.07, stdev=8368.92 00:44:10.334 clat (usec): min=198, max=1026, avg=606.69, stdev=153.17 00:44:10.334 lat (usec): min=236, max=1059, avg=637.76, stdev=155.87 00:44:10.334 clat percentiles (usec): 00:44:10.334 | 1.00th=[ 253], 5.00th=[ 338], 10.00th=[ 383], 20.00th=[ 474], 00:44:10.334 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 660], 00:44:10.334 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 832], 00:44:10.334 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1029], 99.95th=[ 1029], 00:44:10.334 | 99.99th=[ 1029] 00:44:10.334 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:44:10.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:10.334 lat (usec) : 250=0.95%, 500=22.50%, 750=57.28%, 1000=15.88% 00:44:10.334 lat (msec) : 2=0.38%, 50=3.02% 00:44:10.334 cpu : usr=0.90%, sys=1.40%, ctx=529, majf=0, minf=1 00:44:10.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:10.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:10.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:10.334 job3: (groupid=0, jobs=1): err= 0: pid=2074028: Sun Oct 13 14:40:13 2024 00:44:10.334 read: IOPS=20, BW=82.1KiB/s (84.1kB/s)(84.0KiB/1023msec) 00:44:10.334 slat (nsec): min=7153, max=32677, avg=26347.86, stdev=6035.01 00:44:10.334 clat (usec): min=656, max=41638, avg=39072.96, stdev=8803.74 00:44:10.334 lat (usec): min=668, max=41645, avg=39099.31, stdev=8807.02 00:44:10.334 clat percentiles (usec): 00:44:10.334 | 1.00th=[ 660], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:44:10.334 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:10.334 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:10.334 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:44:10.334 | 99.99th=[41681] 00:44:10.334 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:44:10.334 slat (nsec): min=9267, max=60585, avg=21657.54, stdev=10116.71 00:44:10.334 clat (usec): min=118, max=702, avg=368.50, stdev=109.98 00:44:10.334 lat (usec): min=131, max=735, avg=390.16, stdev=111.14 00:44:10.334 clat percentiles (usec): 00:44:10.334 | 1.00th=[ 130], 5.00th=[ 215], 10.00th=[ 233], 20.00th=[ 255], 00:44:10.334 | 30.00th=[ 297], 40.00th=[ 334], 50.00th=[ 375], 60.00th=[ 400], 00:44:10.334 | 70.00th=[ 433], 80.00th=[ 469], 90.00th=[ 515], 95.00th=[ 537], 00:44:10.334 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 701], 99.95th=[ 701], 00:44:10.334 | 99.99th=[ 701] 00:44:10.334 bw ( KiB/s): min= 4096, max= 4096, per=46.29%, avg=4096.00, stdev= 0.00, samples=1 00:44:10.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:10.334 lat (usec) : 250=17.26%, 500=65.29%, 750=13.70% 00:44:10.334 lat (msec) : 50=3.75% 00:44:10.334 cpu : usr=0.49%, sys=1.66%, ctx=533, majf=0, minf=1 00:44:10.334 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:10.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:10.334 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:10.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:10.334 00:44:10.334 Run status group 0 (all jobs): 00:44:10.334 READ: bw=2221KiB/s (2274kB/s), 67.7KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=2272KiB (2327kB), run=1001-1023msec 00:44:10.334 WRITE: bw=8848KiB/s (9061kB/s), 2002KiB/s-2905KiB/s (2050kB/s-2975kB/s), io=9052KiB (9269kB), run=1001-1023msec 00:44:10.334 00:44:10.334 Disk stats (read/write): 00:44:10.334 nvme0n1: ios=524/512, merge=0/0, ticks=698/319, in_queue=1017, util=96.19% 00:44:10.334 nvme0n2: ios=45/512, merge=0/0, ticks=829/272, in_queue=1101, util=95.81% 00:44:10.334 nvme0n3: ios=12/512, merge=0/0, ticks=463/299, in_queue=762, util=88.43% 00:44:10.334 nvme0n4: ios=16/512, merge=0/0, ticks=616/190, in_queue=806, util=89.47% 00:44:10.334 14:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:44:10.334 [global] 00:44:10.334 thread=1 00:44:10.334 invalidate=1 00:44:10.334 rw=randwrite 00:44:10.334 time_based=1 00:44:10.334 runtime=1 00:44:10.334 ioengine=libaio 00:44:10.334 direct=1 00:44:10.334 bs=4096 00:44:10.334 iodepth=1 00:44:10.334 norandommap=0 00:44:10.334 numjobs=1 00:44:10.334 00:44:10.334 verify_dump=1 00:44:10.334 verify_backlog=512 00:44:10.334 verify_state_save=0 00:44:10.334 do_verify=1 00:44:10.334 verify=crc32c-intel 00:44:10.334 [job0] 00:44:10.334 filename=/dev/nvme0n1 00:44:10.334 [job1] 00:44:10.334 filename=/dev/nvme0n2 00:44:10.334 [job2] 00:44:10.334 filename=/dev/nvme0n3 00:44:10.334 [job3] 00:44:10.334 filename=/dev/nvme0n4 00:44:10.334 Could not set queue depth (nvme0n1) 00:44:10.334 Could not set queue depth (nvme0n2) 00:44:10.334 Could not set queue depth (nvme0n3) 00:44:10.334 Could not set queue depth (nvme0n4) 00:44:10.604 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:10.604 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:10.604 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:10.604 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:10.604 fio-3.35 00:44:10.604 Starting 4 threads 00:44:11.990 00:44:11.990 job0: (groupid=0, jobs=1): err= 0: pid=2074511: Sun Oct 13 14:40:15 2024 00:44:11.990 read: IOPS=678, BW=2713KiB/s (2778kB/s)(2716KiB/1001msec) 00:44:11.990 slat (nsec): min=2574, max=61496, avg=13435.90, stdev=8855.04 00:44:11.990 clat (usec): min=438, max=904, avg=755.14, stdev=74.00 00:44:11.990 lat (usec): min=448, max=928, avg=768.57, stdev=76.84 00:44:11.990 clat percentiles (usec): 00:44:11.990 | 1.00th=[ 523], 5.00th=[ 619], 10.00th=[ 660], 20.00th=[ 693], 00:44:11.990 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 766], 60.00th=[ 783], 00:44:11.990 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 857], 00:44:11.990 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 906], 99.95th=[ 906], 00:44:11.990 | 99.99th=[ 906] 00:44:11.990 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 
zone resets 00:44:11.990 slat (nsec): min=3341, max=65017, avg=24700.51, stdev=12137.49 00:44:11.990 clat (usec): min=140, max=717, avg=433.62, stdev=76.16 00:44:11.990 lat (usec): min=151, max=751, avg=458.32, stdev=80.86 00:44:11.990 clat percentiles (usec): 00:44:11.990 | 1.00th=[ 239], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 363], 00:44:11.990 | 30.00th=[ 388], 40.00th=[ 429], 50.00th=[ 453], 60.00th=[ 465], 00:44:11.990 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:44:11.990 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 685], 99.95th=[ 717], 00:44:11.990 | 99.99th=[ 717] 00:44:11.990 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:44:11.990 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:11.990 lat (usec) : 250=0.70%, 500=50.85%, 750=23.43%, 1000=25.01% 00:44:11.990 cpu : usr=2.20%, sys=2.90%, ctx=1706, majf=0, minf=1 00:44:11.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.990 issued rwts: total=679,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.990 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:11.990 job1: (groupid=0, jobs=1): err= 0: pid=2074512: Sun Oct 13 14:40:15 2024 00:44:11.990 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:44:11.990 slat (nsec): min=8105, max=25795, avg=23487.59, stdev=5334.97 00:44:11.990 clat (usec): min=947, max=42105, avg=39423.43, stdev=9920.84 00:44:11.990 lat (usec): min=957, max=42131, avg=39446.91, stdev=9924.16 00:44:11.990 clat percentiles (usec): 00:44:11.990 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[41157], 20.00th=[41681], 00:44:11.990 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:44:11.990 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:11.990 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:11.990 | 99.99th=[42206] 00:44:11.990 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:44:11.990 slat (nsec): min=9382, max=75876, avg=30048.33, stdev=7284.99 00:44:11.990 clat (usec): min=241, max=1027, avg=663.07, stdev=133.32 00:44:11.990 lat (usec): min=250, max=1039, avg=693.12, stdev=135.32 00:44:11.990 clat percentiles (usec): 00:44:11.990 | 1.00th=[ 322], 5.00th=[ 433], 10.00th=[ 494], 20.00th=[ 553], 00:44:11.990 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:44:11.990 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 832], 95.00th=[ 873], 00:44:11.990 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1029], 99.95th=[ 1029], 00:44:11.990 | 99.99th=[ 1029] 00:44:11.990 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:44:11.990 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:11.990 lat (usec) : 250=0.19%, 500=10.40%, 750=59.92%, 1000=26.28% 00:44:11.990 lat (msec) : 2=0.19%, 50=3.02% 00:44:11.990 cpu : usr=0.97%, sys=1.36%, ctx=531, majf=0, minf=1 00:44:11.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.990 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.990 latency : target=0, window=0, percentile=100.00%, depth=1 
00:44:11.990 job2: (groupid=0, jobs=1): err= 0: pid=2074520: Sun Oct 13 14:40:15 2024 00:44:11.990 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1040msec) 00:44:11.990 slat (nsec): min=25089, max=26900, avg=25993.88, stdev=346.34 00:44:11.991 clat (usec): min=41544, max=42099, avg=41931.33, stdev=129.30 00:44:11.991 lat (usec): min=41569, max=42126, avg=41957.32, stdev=129.53 00:44:11.991 clat percentiles (usec): 00:44:11.991 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:44:11.991 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:44:11.991 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:11.991 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:11.991 | 99.99th=[42206] 00:44:11.991 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:44:11.991 slat (nsec): min=9638, max=67748, avg=29552.33, stdev=8576.82 00:44:11.991 clat (usec): min=172, max=1093, avg=599.82, stdev=151.80 00:44:11.991 lat (usec): min=203, max=1125, avg=629.38, stdev=155.01 00:44:11.991 clat percentiles (usec): 00:44:11.991 | 1.00th=[ 281], 5.00th=[ 330], 10.00th=[ 408], 20.00th=[ 478], 00:44:11.991 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 635], 00:44:11.991 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 799], 95.00th=[ 865], 00:44:11.991 | 99.00th=[ 947], 99.50th=[ 1004], 99.90th=[ 1090], 99.95th=[ 1090], 00:44:11.991 | 99.99th=[ 1090] 00:44:11.991 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:44:11.991 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:11.991 lat (usec) : 250=0.19%, 500=22.50%, 750=59.74%, 1000=13.80% 00:44:11.991 lat (msec) : 2=0.57%, 50=3.21% 00:44:11.991 cpu : usr=0.77%, sys=1.44%, ctx=530, majf=0, minf=1 00:44:11.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.991 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:11.991 job3: (groupid=0, jobs=1): err= 0: pid=2074527: Sun Oct 13 14:40:15 2024 00:44:11.991 read: IOPS=16, BW=65.7KiB/s (67.3kB/s)(68.0KiB/1035msec) 00:44:11.991 slat (nsec): min=25715, max=26568, avg=26086.00, stdev=200.82 00:44:11.991 clat (usec): min=41266, max=42048, avg=41916.85, stdev=175.78 00:44:11.991 lat (usec): min=41292, max=42074, avg=41942.94, stdev=175.87 00:44:11.991 clat percentiles (usec): 00:44:11.991 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:44:11.991 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:44:11.991 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:11.991 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:11.991 | 99.99th=[42206] 00:44:11.991 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:44:11.991 slat (nsec): min=9711, max=50911, avg=30916.29, stdev=6593.81 00:44:11.991 clat (usec): min=184, max=1039, avg=589.24, stdev=155.85 00:44:11.991 lat (usec): min=214, max=1071, avg=620.16, stdev=157.22 00:44:11.991 clat percentiles (usec): 00:44:11.991 | 1.00th=[ 241], 5.00th=[ 318], 10.00th=[ 379], 20.00th=[ 445], 00:44:11.991 | 30.00th=[ 510], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 644], 00:44:11.991 | 70.00th=[ 685], 80.00th=[ 717], 
90.00th=[ 783], 95.00th=[ 824], 00:44:11.991 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1037], 99.95th=[ 1037], 00:44:11.991 | 99.99th=[ 1037] 00:44:11.991 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:44:11.991 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:44:11.991 lat (usec) : 250=1.32%, 500=25.52%, 750=55.58%, 1000=14.18% 00:44:11.991 lat (msec) : 2=0.19%, 50=3.21% 00:44:11.991 cpu : usr=0.77%, sys=1.55%, ctx=529, majf=0, minf=1 00:44:11.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:11.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.991 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:11.991 00:44:11.991 Run status group 0 (all jobs): 00:44:11.991 READ: bw=2808KiB/s (2875kB/s), 65.4KiB/s-2713KiB/s (67.0kB/s-2778kB/s), io=2920KiB (2990kB), run=1001-1040msec 00:44:11.991 WRITE: bw=9846KiB/s (10.1MB/s), 1969KiB/s-4092KiB/s (2016kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1040msec 00:44:11.991 00:44:11.991 Disk stats (read/write): 00:44:11.991 nvme0n1: ios=542/845, merge=0/0, ticks=781/368, in_queue=1149, util=99.80% 00:44:11.991 nvme0n2: ios=54/512, merge=0/0, ticks=608/311, in_queue=919, util=94.62% 00:44:11.991 nvme0n3: ios=16/512, merge=0/0, ticks=671/298, in_queue=969, util=89.43% 00:44:11.991 nvme0n4: ios=16/512, merge=0/0, ticks=671/286, in_queue=957, util=91.34% 00:44:11.991 14:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:44:11.991 [global] 00:44:11.991 thread=1 00:44:11.991 invalidate=1 00:44:11.991 rw=write 00:44:11.991 time_based=1 00:44:11.991 runtime=1 00:44:11.991 ioengine=libaio 00:44:11.991 direct=1 00:44:11.991 bs=4096 00:44:11.991 iodepth=128 00:44:11.991 norandommap=0 00:44:11.991 numjobs=1 00:44:11.991 00:44:11.991 verify_dump=1 00:44:11.991 verify_backlog=512 00:44:11.991 verify_state_save=0 00:44:11.991 do_verify=1 00:44:11.991 verify=crc32c-intel 00:44:11.991 [job0] 00:44:11.991 filename=/dev/nvme0n1 00:44:11.991 [job1] 00:44:11.991 filename=/dev/nvme0n2 00:44:11.991 [job2] 00:44:11.991 filename=/dev/nvme0n3 00:44:11.991 [job3] 00:44:11.991 filename=/dev/nvme0n4 00:44:11.991 Could not set queue depth (nvme0n1) 00:44:11.991 Could not set queue depth (nvme0n2) 00:44:11.991 Could not set queue depth (nvme0n3) 00:44:11.991 Could not set queue depth (nvme0n4) 00:44:12.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:12.252 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:12.252 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:12.252 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:12.252 fio-3.35 00:44:12.252 Starting 4 threads 00:44:13.638 00:44:13.638 job0: (groupid=0, jobs=1): err= 0: pid=2075036: Sun Oct 13 14:40:17 2024 00:44:13.638 read: IOPS=8768, BW=34.3MiB/s (35.9MB/s)(34.4MiB/1004msec) 00:44:13.638 slat (nsec): min=907, max=10627k, avg=57920.19, stdev=428380.15 00:44:13.638 clat (usec): min=2700, max=29508, avg=7627.55, stdev=3488.29 00:44:13.638 
lat (usec): min=2706, max=29534, avg=7685.47, stdev=3516.81 00:44:13.638 clat percentiles (usec): 00:44:13.638 | 1.00th=[ 3556], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 5538], 00:44:13.638 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7111], 00:44:13.638 | 70.00th=[ 7570], 80.00th=[ 8455], 90.00th=[12125], 95.00th=[15664], 00:44:13.638 | 99.00th=[21365], 99.50th=[21627], 99.90th=[23725], 99.95th=[23725], 00:44:13.638 | 99.99th=[29492] 00:44:13.638 write: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1004msec); 0 zone resets 00:44:13.638 slat (nsec): min=1550, max=10093k, avg=49042.37, stdev=361908.07 00:44:13.638 clat (usec): min=980, max=22162, avg=6536.26, stdev=2643.37 00:44:13.638 lat (usec): min=988, max=22172, avg=6585.30, stdev=2662.23 00:44:13.638 clat percentiles (usec): 00:44:13.638 | 1.00th=[ 2671], 5.00th=[ 3687], 10.00th=[ 4146], 20.00th=[ 4621], 00:44:13.638 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5866], 60.00th=[ 6194], 00:44:13.638 | 70.00th=[ 7111], 80.00th=[ 7767], 90.00th=[ 9503], 95.00th=[11994], 00:44:13.638 | 99.00th=[16450], 99.50th=[21365], 99.90th=[21365], 99.95th=[22152], 00:44:13.638 | 99.99th=[22152] 00:44:13.638 bw ( KiB/s): min=28672, max=44840, per=40.76%, avg=36756.00, stdev=11432.50, samples=2 00:44:13.638 iops : min= 7168, max=11210, avg=9189.00, stdev=2858.13, samples=2 00:44:13.638 lat (usec) : 1000=0.01% 00:44:13.638 lat (msec) : 2=0.13%, 4=5.28%, 10=84.38%, 20=8.45%, 50=1.75% 00:44:13.638 cpu : usr=4.99%, sys=7.18%, ctx=685, majf=0, minf=1 00:44:13.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:44:13.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:13.638 issued rwts: total=8804,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:13.638 job1: (groupid=0, jobs=1): err= 0: pid=2075037: Sun Oct 13 14:40:17 2024 00:44:13.638 read: IOPS=6152, BW=24.0MiB/s (25.2MB/s)(24.3MiB/1010msec) 00:44:13.638 slat (nsec): min=909, max=8881.2k, avg=57409.74, stdev=421728.39 00:44:13.638 clat (usec): min=1265, max=34942, avg=8096.34, stdev=4187.51 00:44:13.638 lat (usec): min=1290, max=34951, avg=8153.75, stdev=4221.18 00:44:13.638 clat percentiles (usec): 00:44:13.638 | 1.00th=[ 2147], 5.00th=[ 4146], 10.00th=[ 4883], 20.00th=[ 5407], 00:44:13.638 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7570], 00:44:13.638 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[15533], 95.00th=[17433], 00:44:13.638 | 99.00th=[22152], 99.50th=[24249], 99.90th=[31327], 99.95th=[34866], 00:44:13.638 | 99.99th=[34866] 00:44:13.638 write: IOPS=6590, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1010msec); 0 zone resets 00:44:13.638 slat (nsec): min=1527, max=10958k, avg=82386.66, stdev=538477.82 00:44:13.638 clat (usec): min=714, max=74965, avg=11654.71, stdev=15600.96 00:44:13.638 lat (usec): min=747, max=74973, avg=11737.09, stdev=15710.55 00:44:13.638 clat percentiles (usec): 00:44:13.638 | 1.00th=[ 1565], 5.00th=[ 3556], 10.00th=[ 3949], 20.00th=[ 4686], 00:44:13.638 | 30.00th=[ 5211], 40.00th=[ 5669], 50.00th=[ 6521], 60.00th=[ 7111], 00:44:13.638 | 70.00th=[ 7898], 80.00th=[ 9241], 90.00th=[24511], 95.00th=[59507], 00:44:13.638 | 99.00th=[71828], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:44:13.638 | 99.99th=[74974] 00:44:13.638 bw ( KiB/s): min=11832, max=40960, per=29.27%, avg=26396.00, stdev=20596.61, samples=2 00:44:13.638 iops : min= 
2958, max=10240, avg=6599.00, stdev=5149.15, samples=2 00:44:13.638 lat (usec) : 750=0.01%, 1000=0.12% 00:44:13.638 lat (msec) : 2=1.03%, 4=6.46%, 10=74.12%, 20=11.57%, 50=3.50% 00:44:13.638 lat (msec) : 100=3.19% 00:44:13.638 cpu : usr=4.76%, sys=5.75%, ctx=513, majf=0, minf=1 00:44:13.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:44:13.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:13.639 issued rwts: total=6214,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:13.639 job2: (groupid=0, jobs=1): err= 0: pid=2075038: Sun Oct 13 14:40:17 2024 00:44:13.639 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:44:13.639 slat (nsec): min=988, max=9241.9k, avg=83362.94, stdev=608015.13 00:44:13.639 clat (usec): min=4849, max=32350, avg=10791.76, stdev=3746.56 00:44:13.639 lat (usec): min=4940, max=32355, avg=10875.12, stdev=3779.31 00:44:13.639 clat percentiles (usec): 00:44:13.639 | 1.00th=[ 5145], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 7570], 00:44:13.639 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10683], 00:44:13.639 | 70.00th=[11731], 80.00th=[14484], 90.00th=[15926], 95.00th=[17957], 00:44:13.639 | 99.00th=[22152], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:44:13.639 | 99.99th=[32375] 00:44:13.639 write: IOPS=5199, BW=20.3MiB/s (21.3MB/s)(20.6MiB/1012msec); 0 zone resets 00:44:13.639 slat (nsec): min=1675, max=12922k, avg=103511.39, stdev=652142.77 00:44:13.639 clat (usec): min=3681, max=61325, avg=13828.95, stdev=11361.08 00:44:13.639 lat (usec): min=3691, max=61337, avg=13932.47, stdev=11438.13 00:44:13.639 clat percentiles (usec): 00:44:13.639 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 7046], 00:44:13.639 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[10028], 60.00th=[11731], 00:44:13.639 | 70.00th=[13829], 80.00th=[15926], 90.00th=[23987], 95.00th=[40633], 00:44:13.639 | 99.00th=[59507], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:44:13.639 | 99.99th=[61080] 00:44:13.639 bw ( KiB/s): min=16384, max=24696, per=22.78%, avg=20540.00, stdev=5877.47, samples=2 00:44:13.639 iops : min= 4096, max= 6174, avg=5135.00, stdev=1469.37, samples=2 00:44:13.639 lat (msec) : 4=0.39%, 10=48.84%, 20=42.95%, 50=5.90%, 100=1.91% 00:44:13.639 cpu : usr=3.76%, sys=5.74%, ctx=316, majf=0, minf=1 00:44:13.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:44:13.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:13.639 issued rwts: total=5120,5262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:13.639 job3: (groupid=0, jobs=1): err= 0: pid=2075039: Sun Oct 13 14:40:17 2024 00:44:13.639 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:44:13.639 slat (nsec): min=959, max=25649k, avg=396829.09, stdev=2313809.23 00:44:13.639 clat (usec): min=15019, max=73251, avg=52458.77, stdev=14145.65 00:44:13.639 lat (usec): min=21660, max=74050, avg=52855.60, stdev=14101.02 00:44:13.639 clat percentiles (usec): 00:44:13.639 | 1.00th=[21890], 5.00th=[24773], 10.00th=[30278], 20.00th=[36439], 00:44:13.639 | 30.00th=[46924], 40.00th=[52691], 50.00th=[55313], 60.00th=[57934], 00:44:13.639 | 70.00th=[61080], 
80.00th=[64750], 90.00th=[68682], 95.00th=[71828], 00:44:13.639 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:44:13.639 | 99.99th=[72877] 00:44:13.639 write: IOPS=1671, BW=6687KiB/s (6847kB/s)(6720KiB/1005msec); 0 zone resets 00:44:13.639 slat (nsec): min=1625, max=10643k, avg=214668.34, stdev=871188.68 00:44:13.639 clat (usec): min=1240, max=68847, avg=27537.75, stdev=19734.84 00:44:13.639 lat (usec): min=1250, max=68854, avg=27752.42, stdev=19862.12 00:44:13.639 clat percentiles (usec): 00:44:13.639 | 1.00th=[ 2376], 5.00th=[ 5473], 10.00th=[ 7570], 20.00th=[ 7832], 00:44:13.639 | 30.00th=[ 9241], 40.00th=[17695], 50.00th=[19006], 60.00th=[29230], 00:44:13.639 | 70.00th=[38536], 80.00th=[51119], 90.00th=[57934], 95.00th=[61604], 00:44:13.639 | 99.00th=[66847], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:44:13.639 | 99.99th=[68682] 00:44:13.639 bw ( KiB/s): min= 4232, max= 8192, per=6.89%, avg=6212.00, stdev=2800.14, samples=2 00:44:13.639 iops : min= 1058, max= 2048, avg=1553.00, stdev=700.04, samples=2 00:44:13.639 lat (msec) : 2=0.44%, 4=1.03%, 10=14.96%, 20=10.39%, 50=30.04% 00:44:13.639 lat (msec) : 100=43.16% 00:44:13.639 cpu : usr=1.29%, sys=1.89%, ctx=252, majf=0, minf=1 00:44:13.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:44:13.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:13.639 issued rwts: total=1536,1680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:13.639 00:44:13.639 Run status group 0 (all jobs): 00:44:13.639 READ: bw=83.7MiB/s (87.7MB/s), 6113KiB/s-34.3MiB/s (6260kB/s-35.9MB/s), io=84.7MiB (88.8MB), run=1004-1012msec 00:44:13.639 WRITE: bw=88.1MiB/s (92.3MB/s), 6687KiB/s-35.9MiB/s (6847kB/s-37.6MB/s), io=89.1MiB (93.4MB), run=1004-1012msec 00:44:13.639 00:44:13.639 Disk stats (read/write): 00:44:13.639 nvme0n1: ios=7147/7168, merge=0/0, ticks=49355/42254, in_queue=91609, util=86.47% 00:44:13.639 nvme0n2: ios=6183/6279, merge=0/0, ticks=46874/53261, in_queue=100135, util=88.38% 00:44:13.639 nvme0n3: ios=3694/4096, merge=0/0, ticks=40656/61025, in_queue=101681, util=96.52% 00:44:13.639 nvme0n4: ios=1183/1536, merge=0/0, ticks=16438/13172, in_queue=29610, util=89.53% 00:44:13.639 14:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:44:13.639 [global] 00:44:13.639 thread=1 00:44:13.639 invalidate=1 00:44:13.639 rw=randwrite 00:44:13.639 time_based=1 00:44:13.639 runtime=1 00:44:13.639 ioengine=libaio 00:44:13.639 direct=1 00:44:13.639 bs=4096 00:44:13.639 iodepth=128 00:44:13.639 norandommap=0 00:44:13.639 numjobs=1 00:44:13.639 00:44:13.639 verify_dump=1 00:44:13.639 verify_backlog=512 00:44:13.639 verify_state_save=0 00:44:13.639 do_verify=1 00:44:13.639 verify=crc32c-intel 00:44:13.639 [job0] 00:44:13.639 filename=/dev/nvme0n1 00:44:13.639 [job1] 00:44:13.639 filename=/dev/nvme0n2 00:44:13.639 [job2] 00:44:13.639 filename=/dev/nvme0n3 00:44:13.639 [job3] 00:44:13.639 filename=/dev/nvme0n4 00:44:13.639 Could not set queue depth (nvme0n1) 00:44:13.639 Could not set queue depth (nvme0n2) 00:44:13.639 Could not set queue depth (nvme0n3) 00:44:13.639 Could not set queue depth (nvme0n4) 00:44:13.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:44:13.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:13.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:13.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:44:13.900 fio-3.35 00:44:13.900 Starting 4 threads 00:44:15.285 00:44:15.285 job0: (groupid=0, jobs=1): err= 0: pid=2075555: Sun Oct 13 14:40:18 2024 00:44:15.285 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:44:15.285 slat (nsec): min=950, max=10996k, avg=123274.60, stdev=839873.70 00:44:15.285 clat (usec): min=2111, max=79498, avg=14726.98, stdev=9233.25 00:44:15.285 lat (usec): min=2138, max=79505, avg=14850.26, stdev=9322.93 00:44:15.285 clat percentiles (usec): 00:44:15.285 | 1.00th=[ 4293], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 8356], 00:44:15.285 | 30.00th=[ 9503], 40.00th=[11731], 50.00th=[13435], 60.00th=[14877], 00:44:15.285 | 70.00th=[16712], 80.00th=[19268], 90.00th=[21890], 95.00th=[27132], 00:44:15.285 | 99.00th=[59507], 99.50th=[71828], 99.90th=[79168], 99.95th=[79168], 00:44:15.285 | 99.99th=[79168] 00:44:15.285 write: IOPS=3519, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1008msec); 0 zone resets 00:44:15.285 slat (nsec): min=1639, max=15997k, avg=159384.28, stdev=900905.28 00:44:15.285 clat (usec): min=800, max=84988, avg=23247.91, stdev=25409.28 00:44:15.285 lat (usec): min=811, max=84996, avg=23407.29, stdev=25588.65 00:44:15.285 clat percentiles (usec): 00:44:15.285 | 1.00th=[ 1565], 5.00th=[ 3752], 10.00th=[ 5014], 20.00th=[ 6980], 00:44:15.285 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[12125], 60.00th=[14615], 00:44:15.285 | 70.00th=[17433], 80.00th=[44827], 90.00th=[74974], 95.00th=[76022], 00:44:15.285 | 99.00th=[81265], 99.50th=[82314], 99.90th=[82314], 99.95th=[85459], 00:44:15.285 | 99.99th=[85459] 00:44:15.285 bw ( KiB/s): min=11952, max=15408, per=13.02%, avg=13680.00, stdev=2443.76, samples=2 00:44:15.285 iops : min= 2988, max= 3852, avg=3420.00, stdev=610.94, samples=2 00:44:15.285 lat (usec) : 1000=0.09% 00:44:15.285 lat (msec) : 2=0.88%, 4=2.31%, 10=37.46%, 20=38.07%, 50=9.98% 00:44:15.285 lat (msec) : 100=11.21% 00:44:15.285 cpu : usr=1.89%, sys=4.47%, ctx=348, majf=0, minf=1 00:44:15.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:44:15.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:15.286 issued rwts: total=3072,3548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:15.286 job1: (groupid=0, jobs=1): err= 0: pid=2075557: Sun Oct 13 14:40:18 2024 00:44:15.286 read: IOPS=8459, BW=33.0MiB/s (34.7MB/s)(33.1MiB/1003msec) 00:44:15.286 slat (nsec): min=882, max=3554.3k, avg=57090.78, stdev=352703.77 00:44:15.286 clat (usec): min=901, max=13609, avg=7361.50, stdev=1015.08 00:44:15.286 lat (usec): min=4175, max=13611, avg=7418.59, stdev=1048.95 00:44:15.286 clat percentiles (usec): 00:44:15.286 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6718], 00:44:15.286 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:44:15.286 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9241], 00:44:15.286 | 99.00th=[10421], 99.50th=[11076], 99.90th=[11338], 99.95th=[11338], 00:44:15.286 | 
99.99th=[13566] 00:44:15.286 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:44:15.286 slat (nsec): min=1491, max=9700.3k, avg=55019.84, stdev=347022.90 00:44:15.286 clat (usec): min=2847, max=16610, avg=7421.35, stdev=1205.29 00:44:15.286 lat (usec): min=2850, max=16858, avg=7476.37, stdev=1220.14 00:44:15.286 clat percentiles (usec): 00:44:15.286 | 1.00th=[ 4359], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 6980], 00:44:15.286 | 30.00th=[ 7111], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:44:15.286 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8356], 95.00th=[ 9110], 00:44:15.286 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:44:15.286 | 99.99th=[16581] 00:44:15.286 bw ( KiB/s): min=33576, max=36056, per=33.15%, avg=34816.00, stdev=1753.62, samples=2 00:44:15.286 iops : min= 8394, max= 9014, avg=8704.00, stdev=438.41, samples=2 00:44:15.286 lat (usec) : 1000=0.01% 00:44:15.286 lat (msec) : 4=0.15%, 10=97.87%, 20=1.97% 00:44:15.286 cpu : usr=6.09%, sys=6.79%, ctx=660, majf=0, minf=1 00:44:15.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:15.286 issued rwts: total=8485,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:15.286 job2: (groupid=0, jobs=1): err= 0: pid=2075564: Sun Oct 13 14:40:18 2024 00:44:15.286 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:44:15.286 slat (nsec): min=988, max=6845.4k, avg=63183.15, stdev=463409.90 00:44:15.286 clat (usec): min=3441, max=16903, avg=8425.74, stdev=2125.71 00:44:15.286 lat (usec): min=3447, max=17673, avg=8488.92, stdev=2148.32 00:44:15.286 clat percentiles (usec): 00:44:15.286 | 1.00th=[ 3818], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6849], 00:44:15.286 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8356], 00:44:15.286 | 70.00th=[ 9241], 80.00th=[10421], 90.00th=[11600], 95.00th=[12387], 00:44:15.286 | 99.00th=[13960], 99.50th=[14222], 99.90th=[15008], 99.95th=[16319], 00:44:15.286 | 99.99th=[16909] 00:44:15.286 write: IOPS=8126, BW=31.7MiB/s (33.3MB/s)(31.9MiB/1006msec); 0 zone resets 00:44:15.286 slat (nsec): min=1592, max=6591.4k, avg=58536.28, stdev=394824.36 00:44:15.286 clat (usec): min=1238, max=14231, avg=7694.73, stdev=1845.41 00:44:15.286 lat (usec): min=1249, max=14540, avg=7753.27, stdev=1851.50 00:44:15.286 clat percentiles (usec): 00:44:15.286 | 1.00th=[ 4015], 5.00th=[ 5145], 10.00th=[ 5473], 20.00th=[ 6063], 00:44:15.286 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 7898], 00:44:15.286 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[10421], 95.00th=[11207], 00:44:15.286 | 99.00th=[12649], 99.50th=[13173], 99.90th=[13960], 99.95th=[14222], 00:44:15.286 | 99.99th=[14222] 00:44:15.286 bw ( KiB/s): min=31608, max=32768, per=30.64%, avg=32188.00, stdev=820.24, samples=2 00:44:15.286 iops : min= 7902, max= 8192, avg=8047.00, stdev=205.06, samples=2 00:44:15.286 lat (msec) : 2=0.02%, 4=1.14%, 10=78.98%, 20=19.86% 00:44:15.286 cpu : usr=4.68%, sys=8.46%, ctx=630, majf=0, minf=3 00:44:15.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:44:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:15.286 issued 
rwts: total=7680,8175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:15.286 job3: (groupid=0, jobs=1): err= 0: pid=2075565: Sun Oct 13 14:40:18 2024 00:44:15.286 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:44:15.286 slat (nsec): min=1004, max=11603k, avg=83321.89, stdev=628345.21 00:44:15.286 clat (usec): min=3489, max=28931, avg=10965.34, stdev=3882.65 00:44:15.286 lat (usec): min=5241, max=28936, avg=11048.66, stdev=3914.72 00:44:15.286 clat percentiles (usec): 00:44:15.286 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 7898], 00:44:15.286 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11076], 00:44:15.286 | 70.00th=[12125], 80.00th=[14091], 90.00th=[15533], 95.00th=[18744], 00:44:15.286 | 99.00th=[24773], 99.50th=[26608], 99.90th=[28967], 99.95th=[28967], 00:44:15.286 | 99.99th=[28967] 00:44:15.286 write: IOPS=5995, BW=23.4MiB/s (24.6MB/s)(23.6MiB/1008msec); 0 zone resets 00:44:15.286 slat (nsec): min=1619, max=12525k, avg=83106.99, stdev=638077.30 00:44:15.286 clat (usec): min=3259, max=39149, avg=10935.16, stdev=5209.74 00:44:15.286 lat (usec): min=3306, max=39152, avg=11018.27, stdev=5244.31 00:44:15.286 clat percentiles (usec): 00:44:15.286 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 7242], 00:44:15.286 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[10028], 60.00th=[11076], 00:44:15.286 | 70.00th=[11863], 80.00th=[13698], 90.00th=[16057], 95.00th=[19268], 00:44:15.286 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:44:15.286 | 99.99th=[39060] 00:44:15.286 bw ( KiB/s): min=22744, max=24576, per=22.52%, avg=23660.00, stdev=1295.42, samples=2 00:44:15.286 iops : min= 5686, max= 6144, avg=5915.00, stdev=323.85, samples=2 00:44:15.286 lat (msec) : 4=0.05%, 10=50.17%, 20=46.03%, 50=3.75% 00:44:15.286 cpu : usr=4.57%, sys=6.06%, ctx=300, majf=0, minf=1 00:44:15.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:44:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:15.286 issued rwts: total=5632,6043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:15.286 00:44:15.286 Run status group 0 (all jobs): 00:44:15.286 READ: bw=96.4MiB/s (101MB/s), 11.9MiB/s-33.0MiB/s (12.5MB/s-34.7MB/s), io=97.1MiB (102MB), run=1003-1008msec 00:44:15.286 WRITE: bw=103MiB/s (108MB/s), 13.7MiB/s-33.9MiB/s (14.4MB/s-35.5MB/s), io=103MiB (108MB), run=1003-1008msec 00:44:15.286 00:44:15.286 Disk stats (read/write): 00:44:15.286 nvme0n1: ios=2076/2370, merge=0/0, ticks=32010/71983, in_queue=103993, util=100.00% 00:44:15.286 nvme0n2: ios=7200/7284, merge=0/0, ticks=25132/23947, in_queue=49079, util=87.41% 00:44:15.286 nvme0n3: ios=6685/6663, merge=0/0, ticks=52692/47928, in_queue=100620, util=95.66% 00:44:15.286 nvme0n4: ios=4656/5063, merge=0/0, ticks=47189/53325, in_queue=100514, util=100.00% 00:44:15.286 14:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:44:15.286 14:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2075896 00:44:15.286 14:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:44:15.286 14:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:44:15.286 [global] 00:44:15.286 thread=1 00:44:15.286 invalidate=1 00:44:15.286 rw=read 00:44:15.286 time_based=1 00:44:15.286 runtime=10 00:44:15.286 ioengine=libaio 00:44:15.286 direct=1 00:44:15.286 bs=4096 00:44:15.286 iodepth=1 00:44:15.286 norandommap=1 00:44:15.286 numjobs=1 00:44:15.286 00:44:15.286 [job0] 00:44:15.286 filename=/dev/nvme0n1 00:44:15.286 [job1] 00:44:15.286 filename=/dev/nvme0n2 00:44:15.286 [job2] 00:44:15.286 filename=/dev/nvme0n3 00:44:15.286 [job3] 00:44:15.286 filename=/dev/nvme0n4 00:44:15.286 Could not set queue depth (nvme0n1) 00:44:15.286 Could not set queue depth (nvme0n2) 00:44:15.286 Could not set queue depth (nvme0n3) 00:44:15.286 Could not set queue depth (nvme0n4) 00:44:15.547 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:15.547 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:15.547 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:15.547 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:44:15.547 fio-3.35 00:44:15.547 Starting 4 threads 00:44:18.089 14:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:44:18.349 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:44:18.349 fio: pid=2076084, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:18.349 14:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:44:18.609 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:18.609 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:44:18.609 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1654784, buflen=4096 00:44:18.609 fio: pid=2076082, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:18.609 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10190848, buflen=4096 00:44:18.609 fio: pid=2076080, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:18.609 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:18.609 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:44:18.871 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:18.871 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:44:18.871 fio: io_u error on file /dev/nvme0n2: Operation not supported: 
read offset=1499136, buflen=4096 00:44:18.871 fio: pid=2076081, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:44:18.871 00:44:18.871 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2076080: Sun Oct 13 14:40:22 2024 00:44:18.871 read: IOPS=843, BW=3374KiB/s (3455kB/s)(9952KiB/2950msec) 00:44:18.871 slat (usec): min=6, max=25786, avg=39.93, stdev=551.31 00:44:18.871 clat (usec): min=291, max=3046, avg=1127.81, stdev=106.08 00:44:18.871 lat (usec): min=316, max=26865, avg=1167.75, stdev=561.26 00:44:18.871 clat percentiles (usec): 00:44:18.871 | 1.00th=[ 865], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1074], 00:44:18.871 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:44:18.871 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1270], 00:44:18.871 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1680], 99.95th=[ 2573], 00:44:18.871 | 99.99th=[ 3032] 00:44:18.871 bw ( KiB/s): min= 3384, max= 3520, per=81.74%, avg=3444.80, stdev=55.02, samples=5 00:44:18.871 iops : min= 846, max= 880, avg=861.20, stdev=13.75, samples=5 00:44:18.871 lat (usec) : 500=0.04%, 750=0.28%, 1000=7.55% 00:44:18.871 lat (msec) : 2=92.00%, 4=0.08% 00:44:18.871 cpu : usr=0.88%, sys=2.61%, ctx=2491, majf=0, minf=2 00:44:18.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.871 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.871 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:18.871 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2076081: Sun Oct 13 14:40:22 2024 00:44:18.871 read: IOPS=116, BW=464KiB/s (476kB/s)(1464KiB/3152msec) 00:44:18.871 slat (usec): min=7, max=2513, avg=32.77, stdev=129.98 00:44:18.871 clat (usec): min=513, max=42112, avg=8510.55, stdev=15471.07 00:44:18.871 lat (usec): min=547, max=44079, avg=8543.34, stdev=15486.79 00:44:18.871 clat percentiles (usec): 00:44:18.871 | 1.00th=[ 832], 5.00th=[ 938], 10.00th=[ 1004], 20.00th=[ 1074], 00:44:18.871 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1221], 00:44:18.871 | 70.00th=[ 1287], 80.00th=[ 1369], 90.00th=[41157], 95.00th=[41157], 00:44:18.871 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:18.871 | 99.99th=[42206] 00:44:18.871 bw ( KiB/s): min= 96, max= 1296, per=10.87%, avg=458.83, stdev=528.61, samples=6 00:44:18.871 iops : min= 24, max= 324, avg=114.67, stdev=132.11, samples=6 00:44:18.871 lat (usec) : 750=0.27%, 1000=9.26% 00:44:18.871 lat (msec) : 2=71.66%, 20=0.27%, 50=18.26% 00:44:18.871 cpu : usr=0.25%, sys=0.25%, ctx=369, majf=0, minf=2 00:44:18.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.871 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.871 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:18.871 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2076082: Sun Oct 13 14:40:22 2024 00:44:18.871 read: IOPS=146, BW=583KiB/s (597kB/s)(1616KiB/2773msec) 00:44:18.871 slat (nsec): min=6741, 
max=66533, avg=26927.81, stdev=4823.99 00:44:18.871 clat (usec): min=413, max=41857, avg=6776.82, stdev=14103.77 00:44:18.871 lat (usec): min=442, max=41886, avg=6803.74, stdev=14104.13 00:44:18.871 clat percentiles (usec): 00:44:18.871 | 1.00th=[ 545], 5.00th=[ 676], 10.00th=[ 742], 20.00th=[ 824], 00:44:18.871 | 30.00th=[ 873], 40.00th=[ 971], 50.00th=[ 1139], 60.00th=[ 1188], 00:44:18.871 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[41157], 95.00th=[41157], 00:44:18.871 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:44:18.871 | 99.99th=[41681] 00:44:18.871 bw ( KiB/s): min= 160, max= 952, per=12.63%, avg=532.80, stdev=317.02, samples=5 00:44:18.871 iops : min= 40, max= 238, avg=133.20, stdev=79.25, samples=5 00:44:18.871 lat (usec) : 500=0.49%, 750=10.86%, 1000=29.14% 00:44:18.871 lat (msec) : 2=44.94%, 50=14.32% 00:44:18.871 cpu : usr=0.11%, sys=0.61%, ctx=405, majf=0, minf=2 00:44:18.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.871 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.871 issued rwts: total=405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:18.871 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2076084: Sun Oct 13 14:40:22 2024 00:44:18.871 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(248KiB/2584msec) 00:44:18.871 slat (nsec): min=8519, max=62715, avg=26011.86, stdev=5182.03 00:44:18.871 clat (usec): min=699, max=42101, avg=41279.92, stdev=5240.25 00:44:18.871 lat (usec): min=762, max=42126, avg=41305.94, stdev=5235.54 00:44:18.871 clat percentiles (usec): 00:44:18.871 | 1.00th=[ 701], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:44:18.871 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:44:18.871 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:18.871 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:44:18.871 | 99.99th=[42206] 00:44:18.871 bw ( KiB/s): min= 96, max= 96, per=2.28%, avg=96.00, stdev= 0.00, samples=5 00:44:18.871 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:44:18.871 lat (usec) : 750=1.59% 00:44:18.871 lat (msec) : 50=96.83% 00:44:18.871 cpu : usr=0.12%, sys=0.00%, ctx=64, majf=0, minf=1 00:44:18.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.872 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.872 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:44:18.872 00:44:18.872 Run status group 0 (all jobs): 00:44:18.872 READ: bw=4213KiB/s (4314kB/s), 96.0KiB/s-3374KiB/s (98.3kB/s-3455kB/s), io=13.0MiB (13.6MB), run=2584-3152msec 00:44:18.872 00:44:18.872 Disk stats (read/write): 00:44:18.872 nvme0n1: ios=2409/0, merge=0/0, ticks=2628/0, in_queue=2628, util=93.62% 00:44:18.872 nvme0n2: ios=364/0, merge=0/0, ticks=3016/0, in_queue=3016, util=95.60% 00:44:18.872 nvme0n3: ios=344/0, merge=0/0, ticks=2549/0, in_queue=2549, util=96.03% 00:44:18.872 nvme0n4: ios=56/0, merge=0/0, ticks=2308/0, in_queue=2308, util=96.02% 00:44:19.133 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:19.133 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:44:19.393 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:19.393 14:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:44:19.393 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:19.393 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:44:19.654 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:44:19.654 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2075896 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:19.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:44:19.915 nvmf hotplug test: fio failed as expected 00:44:19.915 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:20.175 rmmod nvme_tcp 00:44:20.175 rmmod nvme_fabrics 00:44:20.175 rmmod nvme_keyring 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2072577 ']' 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2072577 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2072577 ']' 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2072577 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2072577 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2072577' 00:44:20.175 killing process with pid 2072577 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2072577 00:44:20.175 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2072577 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso 
']' 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:20.436 14:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:22.346 14:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:22.346 00:44:22.346 real 0m28.350s 00:44:22.346 user 2m14.849s 00:44:22.346 sys 0m12.283s 00:44:22.346 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:22.346 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:44:22.346 ************************************ 00:44:22.346 END TEST nvmf_fio_target 00:44:22.346 ************************************ 00:44:22.346 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:22.346 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:22.346 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:22.346 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:22.608 ************************************ 00:44:22.608 START TEST nvmf_bdevio 00:44:22.608 ************************************ 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:44:22.608 * Looking for test storage... 
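The nvmf_fio_target run that just ended ("END TEST nvmf_fio_target" above) is a hotplug exercise: fio is launched in the background against the exported namespaces, the backing raid/malloc bdevs are deleted over RPC while I/O is still in flight, and the resulting fio failure (status 4, "Operation not supported" on every file) is the expected outcome. Below is a condensed sketch of that sequence, reconstructed only from the commands traced above; the authoritative logic lives in test/nvmf/target/fio.sh, and this sketch omits steps such as the nvme disconnect between the wait and the status check.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sync
# start a 10-second read job against nvme0n1..nvme0n4 in the background
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# delete the backing bdevs while fio is still issuing reads
$SPDK/scripts/rpc.py bdev_raid_delete concat0
$SPDK/scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
# fio must NOT succeed once its namespaces are gone
fio_status=0
wait "$fio_pid" || fio_status=$?
if [ "$fio_status" -eq 0 ]; then
    echo 'unexpected: fio survived bdev hotplug'
    exit 1
fi
echo 'nvmf hotplug test: fio failed as expected'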
00:44:22.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:22.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.608 --rc genhtml_branch_coverage=1 00:44:22.608 --rc genhtml_function_coverage=1 00:44:22.608 --rc genhtml_legend=1 00:44:22.608 --rc geninfo_all_blocks=1 00:44:22.608 --rc geninfo_unexecuted_blocks=1 00:44:22.608 00:44:22.608 ' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:22.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.608 --rc genhtml_branch_coverage=1 00:44:22.608 --rc genhtml_function_coverage=1 00:44:22.608 --rc genhtml_legend=1 00:44:22.608 --rc geninfo_all_blocks=1 00:44:22.608 --rc geninfo_unexecuted_blocks=1 00:44:22.608 00:44:22.608 ' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:22.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.608 --rc genhtml_branch_coverage=1 00:44:22.608 --rc genhtml_function_coverage=1 00:44:22.608 --rc genhtml_legend=1 00:44:22.608 --rc geninfo_all_blocks=1 00:44:22.608 --rc geninfo_unexecuted_blocks=1 00:44:22.608 00:44:22.608 ' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:22.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.608 --rc genhtml_branch_coverage=1 00:44:22.608 --rc genhtml_function_coverage=1 00:44:22.608 --rc genhtml_legend=1 00:44:22.608 --rc geninfo_all_blocks=1 00:44:22.608 --rc geninfo_unexecuted_blocks=1 00:44:22.608 00:44:22.608 ' 00:44:22.608 14:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:22.608 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:22.869 14:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:22.869 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:22.870 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:22.870 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:44:22.870 14:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:31.004 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:31.004 14:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:31.004 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:31.004 Found net devices under 0000:31:00.0: cvl_0_0 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:31.004 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:31.005 Found net devices under 0000:31:00.1: cvl_0_1 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:31.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:31.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:44:31.005 00:44:31.005 --- 10.0.0.2 ping statistics --- 00:44:31.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:31.005 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:31.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:31.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:44:31.005 00:44:31.005 --- 10.0.0.1 ping statistics --- 00:44:31.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:31.005 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:31.005 14:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2081170 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2081170 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2081170 ']' 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:31.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:31.005 14:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.005 [2024-10-13 14:40:33.860915] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:31.005 [2024-10-13 14:40:33.862070] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:44:31.005 [2024-10-13 14:40:33.862118] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:31.005 [2024-10-13 14:40:34.003871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:31.005 [2024-10-13 14:40:34.051460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:31.005 [2024-10-13 14:40:34.079049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:31.005 [2024-10-13 14:40:34.079098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:31.005 [2024-10-13 14:40:34.079106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:31.005 [2024-10-13 14:40:34.079113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:31.005 [2024-10-13 14:40:34.079119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
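A note on the -m 0x78 argument traced above: SPDK core masks are plain bitmaps, so 0x78 = 0b01111000 selects cores 3 through 6, which is exactly the four reactors reported in the next few entries. A minimal shell sketch for decoding any such mask (the mask value is taken from this run; the loop bound of 63 is an illustrative assumption, not an SPDK limit):

# decode an SPDK -m core mask into the cores it selects (illustrative)
mask=0x78
for core in $(seq 0 63); do
  if (( (mask >> core) & 1 )); then
    echo "reactor expected on core $core"
  fi
done

For 0x78 this prints cores 3, 4, 5 and 6, matching the four 'Reactor started on core N' notices below.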
00:44:31.005 [2024-10-13 14:40:34.081307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:44:31.005 [2024-10-13 14:40:34.081467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:44:31.005 [2024-10-13 14:40:34.081877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:44:31.005 [2024-10-13 14:40:34.081878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:31.005 [2024-10-13 14:40:34.155218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:31.005 [2024-10-13 14:40:34.155963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:31.005 [2024-10-13 14:40:34.156377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:44:31.005 [2024-10-13 14:40:34.157031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:31.005 [2024-10-13 14:40:34.157076] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:31.005 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:31.005 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:44:31.005 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:31.005 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:31.005 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.267 [2024-10-13 14:40:34.722836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.267 Malloc0 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
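The rpc_cmd calls in this stretch are the whole bdevio fixture: create the TCP transport, back it with a 64 MiB malloc bdev, and expose that bdev through subsystem cnode1 (the namespace and listener steps complete just below). As standalone RPCs against the already-running target they would look like this sketch, where $rootdir stands for the SPDK checkout and the default /var/tmp/spdk.sock RPC socket is assumed:

# the bdevio target setup as one-off RPCs (sketch)
$rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$rootdir/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420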
00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:31.267 [2024-10-13 14:40:34.811183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:31.267 { 00:44:31.267 "params": { 00:44:31.267 "name": "Nvme$subsystem", 00:44:31.267 "trtype": "$TEST_TRANSPORT", 00:44:31.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:31.267 "adrfam": "ipv4", 00:44:31.267 "trsvcid": "$NVMF_PORT", 00:44:31.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:31.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:31.267 "hdgst": ${hdgst:-false}, 00:44:31.267 "ddgst": ${ddgst:-false} 00:44:31.267 }, 00:44:31.267 "method": "bdev_nvme_attach_controller" 00:44:31.267 } 00:44:31.267 EOF 00:44:31.267 )") 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
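The heredoc machinery traced above is gen_nvmf_target_json building the JSON that bdevio reads from /dev/fd/62: one bdev_nvme_attach_controller fragment per subsystem, comma-joined, then run through jq, which doubles as a syntax check. A condensed single-subsystem sketch of just the fragment-building and join visible in this trace (the full helper also embeds the result in a complete JSON config before handing it to bdevio), with the placeholders filled in as this run fills them:

# build the attach-controller config that bdevio consumes (condensed sketch)
config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
IFS=','
printf '%s\n' "${config[*]}" | jq .

The rendered fragment is visible verbatim at the start of the next block of entries.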
00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:44:31.267 14:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:31.267 "params": { 00:44:31.267 "name": "Nvme1", 00:44:31.267 "trtype": "tcp", 00:44:31.267 "traddr": "10.0.0.2", 00:44:31.267 "adrfam": "ipv4", 00:44:31.267 "trsvcid": "4420", 00:44:31.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:31.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:31.267 "hdgst": false, 00:44:31.267 "ddgst": false 00:44:31.267 }, 00:44:31.267 "method": "bdev_nvme_attach_controller" 00:44:31.267 }' 00:44:31.267 [2024-10-13 14:40:34.866449] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:44:31.267 [2024-10-13 14:40:34.866527] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081256 ] 00:44:31.528 [2024-10-13 14:40:35.001761] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:31.528 [2024-10-13 14:40:35.053042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:31.528 [2024-10-13 14:40:35.085535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:31.528 [2024-10-13 14:40:35.085695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.528 [2024-10-13 14:40:35.085695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:31.789 I/O targets: 00:44:31.789 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:44:31.789 00:44:31.789 00:44:31.789 CUnit - A unit testing framework for C - Version 2.1-3 00:44:31.789 http://cunit.sourceforge.net/ 00:44:31.789 00:44:31.789 00:44:31.789 Suite: bdevio tests on: Nvme1n1 00:44:31.789 Test: blockdev write read block ...passed 00:44:31.789 Test: blockdev write zeroes read block ...passed 00:44:32.050 Test: blockdev write zeroes read no split ...passed 00:44:32.050 Test: blockdev write zeroes read split ...passed 00:44:32.050 Test: blockdev write zeroes read split partial ...passed 00:44:32.050 Test: blockdev reset ...[2024-10-13 14:40:35.574110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:32.050 [2024-10-13 14:40:35.574210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1869750 (9): Bad file descriptor 00:44:32.050 [2024-10-13 14:40:35.623468] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:32.050 passed 00:44:32.050 Test: blockdev write read 8 blocks ...passed 00:44:32.050 Test: blockdev write read size > 128k ...passed 00:44:32.050 Test: blockdev write read invalid size ...passed 00:44:32.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:32.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:32.050 Test: blockdev write read max offset ...passed 00:44:32.310 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:32.310 Test: blockdev writev readv 8 blocks ...passed 00:44:32.310 Test: blockdev writev readv 30 x 1block ...passed 00:44:32.310 Test: blockdev writev readv block ...passed 00:44:32.310 Test: blockdev writev readv size > 128k ...passed 00:44:32.310 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:32.310 Test: blockdev comparev and writev ...[2024-10-13 14:40:35.890351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.890406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:32.310 [2024-10-13 14:40:35.890424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.890433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:32.310 [2024-10-13 14:40:35.891097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.891110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:32.310 [2024-10-13 14:40:35.891124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.891132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:32.310 [2024-10-13 14:40:35.891759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.891770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:32.310 [2024-10-13 14:40:35.891784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.891793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:32.310 [2024-10-13 14:40:35.892466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.310 [2024-10-13 14:40:35.892484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:32.311 [2024-10-13 14:40:35.892498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:32.311 [2024-10-13 14:40:35.892506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:32.311 passed 00:44:32.311 Test: blockdev nvme passthru rw ...passed 00:44:32.311 Test: blockdev nvme passthru vendor specific ...[2024-10-13 14:40:35.977057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:32.311 [2024-10-13 14:40:35.977077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:32.311 [2024-10-13 14:40:35.977476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:32.311 [2024-10-13 14:40:35.977487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:32.311 [2024-10-13 14:40:35.977897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:32.311 [2024-10-13 14:40:35.977907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:32.311 [2024-10-13 14:40:35.978363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:32.311 [2024-10-13 14:40:35.978374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:32.311 passed 00:44:32.311 Test: blockdev nvme admin passthru ...passed 00:44:32.577 Test: blockdev copy ...passed 00:44:32.577 00:44:32.577 Run Summary: Type Total Ran Passed Failed Inactive 00:44:32.577 suites 1 1 n/a 0 0 00:44:32.577 tests 23 23 23 0 0 00:44:32.577 asserts 152 152 152 0 n/a 00:44:32.577 00:44:32.577 Elapsed time = 1.277 seconds 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:32.577 rmmod nvme_tcp 00:44:32.577 rmmod nvme_fabrics 00:44:32.577 rmmod nvme_keyring 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
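The teardown entries just below lean on the killprocess helper, which refuses to signal anything other than the process it was given: it first checks the pid is still alive, then probes the command name (here 'ps --no-headers -o comm=' resolves pid 2081170 to reactor_3, the target's primary-core thread) and bails out if it would be killing the sudo wrapper. A rough reconstruction of that guard logic from the @950..@974 trace (simplified sketch; the real helper in autotest_common.sh also covers non-Linux and retries):

# killprocess guard logic, reconstructed from the trace below (simplified)
killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2> /dev/null || return 0      # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")      # Linux path, as in this run
  [[ $name != sudo ]] || return 1              # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2> /dev/null || true             # reap if it was our child
}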
00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2081170 ']' 00:44:32.577 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2081170 00:44:32.578 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2081170 ']' 00:44:32.578 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2081170 00:44:32.578 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:44:32.578 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:32.578 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081170 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081170' 00:44:32.907 killing process with pid 2081170 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2081170 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2081170 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:32.907 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:44:32.908 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:32.908 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:32.908 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:32.908 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:32.908 14:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.489 14:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:35.489 00:44:35.489 real 0m12.490s 00:44:35.489 user 
0m10.906s 00:44:35.489 sys 0m6.479s 00:44:35.489 14:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:35.489 14:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:44:35.489 ************************************ 00:44:35.489 END TEST nvmf_bdevio 00:44:35.489 ************************************ 00:44:35.489 14:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:44:35.489 00:44:35.489 real 5m0.745s 00:44:35.489 user 10m15.113s 00:44:35.489 sys 2m6.600s 00:44:35.489 14:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:35.489 14:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:35.489 ************************************ 00:44:35.489 END TEST nvmf_target_core_interrupt_mode 00:44:35.489 ************************************ 00:44:35.489 14:40:38 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:35.489 14:40:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:35.489 14:40:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:35.489 14:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:35.489 ************************************ 00:44:35.489 START TEST nvmf_interrupt 00:44:35.489 ************************************ 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:44:35.489 * Looking for test storage... 
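The run_test wrapper that opens nvmf_interrupt here essentially prints the START/END banners, times the script (the real/user/sys figures above are its output), and propagates the exit code; the test body is an ordinary executable. The same run can therefore be reproduced directly from a workspace checkout (paths as used by this job):

# rerun the interrupt test standalone
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode

The lcov probing that follows is only the harness detecting whether coverage options should be exported; it is unrelated to the NVMe-oF paths under test.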
00:44:35.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:44:35.489 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.490 --rc genhtml_branch_coverage=1 00:44:35.490 --rc genhtml_function_coverage=1 00:44:35.490 --rc genhtml_legend=1 00:44:35.490 --rc geninfo_all_blocks=1 00:44:35.490 --rc geninfo_unexecuted_blocks=1 00:44:35.490 00:44:35.490 ' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.490 --rc genhtml_branch_coverage=1 00:44:35.490 --rc genhtml_function_coverage=1 00:44:35.490 --rc genhtml_legend=1 00:44:35.490 --rc geninfo_all_blocks=1 00:44:35.490 --rc geninfo_unexecuted_blocks=1 00:44:35.490 00:44:35.490 ' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.490 --rc genhtml_branch_coverage=1 00:44:35.490 --rc genhtml_function_coverage=1 00:44:35.490 --rc genhtml_legend=1 00:44:35.490 --rc geninfo_all_blocks=1 00:44:35.490 --rc geninfo_unexecuted_blocks=1 00:44:35.490 00:44:35.490 ' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:35.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.490 --rc genhtml_branch_coverage=1 00:44:35.490 --rc genhtml_function_coverage=1 00:44:35.490 --rc genhtml_legend=1 00:44:35.490 --rc geninfo_all_blocks=1 00:44:35.490 --rc geninfo_unexecuted_blocks=1 00:44:35.490 00:44:35.490 ' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:44:35.490 14:40:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:43.633 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.633 14:40:46 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:43.633 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:43.633 Found net devices under 0000:31:00.0: cvl_0_0 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:43.633 Found net devices under 0000:31:00.1: cvl_0_1 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:43.633 14:40:46 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:43.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:43.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:44:43.633 00:44:43.633 --- 10.0.0.2 ping statistics --- 00:44:43.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.633 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:43.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:43.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:44:43.633 00:44:43.633 --- 10.0.0.1 ping statistics --- 00:44:43.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:43.633 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:44:43.633 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2085845 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2085845 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2085845 ']' 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:43.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:43.634 14:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.634 [2024-10-13 14:40:46.517077] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:43.634 [2024-10-13 14:40:46.518077] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:44:43.634 [2024-10-13 14:40:46.518115] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:43.634 [2024-10-13 14:40:46.655084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
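Everything from the PCI scan ('Found 0000:31:00.0 ...') down to the two pings is nvmf_tcp_init rebuilding, for the interrupt test, the same two-port fixture the bdevio run used: the target side of the e810 pair moves into a network namespace, each side gets one address on 10.0.0.0/24, a firewall exception is opened for the NVMe/TCP port, and connectivity is proven in both directions. Collected out of the trace, the sequence is:

# the fixture nvmf_tcp_init assembles, in trace order
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Note the trace also flushes both interfaces with 'ip -4 addr flush' beforehand so reruns start from a clean state.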
00:44:43.634 [2024-10-13 14:40:46.701506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:43.634 [2024-10-13 14:40:46.718861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:43.634 [2024-10-13 14:40:46.718890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:43.634 [2024-10-13 14:40:46.718898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:43.634 [2024-10-13 14:40:46.718905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:43.634 [2024-10-13 14:40:46.718911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:43.634 [2024-10-13 14:40:46.720098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:43.634 [2024-10-13 14:40:46.720118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.634 [2024-10-13 14:40:46.768268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:43.634 [2024-10-13 14:40:46.768813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:43.634 [2024-10-13 14:40:46.769166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:43.634 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:43.634 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:44:43.634 14:40:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:43.634 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:43.634 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:44:43.895 5000+0 records in 00:44:43.895 5000+0 records out 00:44:43.895 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0177913 s, 576 MB/s 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.895 AIO0 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.895 
[2024-10-13 14:40:47.428994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:43.895 [2024-10-13 14:40:47.473360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2085845 0 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2085845 0 idle 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:43.895 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085845 root 20 0 128.2g 43776 31104 S 0.0 0.0 0:00.22 reactor_0' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085845 root 20 0 128.2g 43776 31104 S 0.0 0.0 0:00.22 reactor_0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2085845 1 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2085845 1 idle 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085935 root 20 0 128.2g 43776 31104 S 0.0 0.0 0:00.00 reactor_1' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085935 root 20 0 128.2g 43776 31104 S 0.0 0.0 0:00.00 reactor_1 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2086036 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2085845 0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2085845 0 busy 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:44.156 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:44.416 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:44.416 14:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085845 root 20 0 128.2g 43776 31104 S 13.3 0.0 0:00.24 reactor_0' 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085845 root 20 0 128.2g 43776 31104 S 13.3 0.0 0:00.24 reactor_0 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:44:44.416 14:40:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:44:45.358 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:44:45.358 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:45.358 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:45.358 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085845 root 20 0 128.2g 43776 31104 R 99.9 0.0 0:02.59 reactor_0' 00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085845 root 20 0 128.2g 43776 31104 R 99.9 0.0 0:02.59 reactor_0 00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@28 -- # cpu_rate=99
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2085845 1
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2085845 1 busy
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256
00:44:45.619 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:44:45.879 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085935 root 20 0 128.2g 43776 31104 R 99.9 0.0 0:01.37 reactor_1'
00:44:45.879 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085935 root 20 0 128.2g 43776 31104 R 99.9 0.0 0:01.37 reactor_1
00:44:45.879 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:44:45.879 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:44:45.879 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:44:45.879 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:44:45.880 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:44:45.880 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:44:45.880 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:44:45.880 14:40:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:44:45.880 14:40:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2086036
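The output that follows came from the spdk_nvme_perf process (pid 2086036) launched at target/interrupt.sh@31 above. For reference, the whole measurement can be reproduced with the same RPCs and the same perf invocation that appear verbatim in this log. This is a minimal sketch, assuming it is run from the root of a built SPDK tree against an nvmf_tgt already listening on the default /var/tmp/spdk.sock RPC socket; the AIO backing-file path is shortened here from the workspace path used by the test:

    # Back the namespace with a ~10 MB AIO file, as setup_bdev_aio did above.
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
    ./scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048
    # TCP transport, subsystem, namespace and listener, mirroring interrupt.sh@18-21.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10 s of 4 KiB random I/O at queue depth 256, 30% reads (-M 30),
    # with perf workers pinned to cores 2 and 3 (-c 0xC), as in this run.
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The queue-size notice below is expected with this combination: perf asks for queue depth 256 while the transport was created with -q 256, so some requests queue at the driver.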
00:44:55.978 Initializing NVMe Controllers
00:44:55.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:44:55.978 Controller IO queue size 256, less than required.
00:44:55.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:44:55.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:44:55.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:44:55.978 Initialization complete. Launching workers.
00:44:55.978 ========================================================
00:44:55.978                                                                                                  Latency(us)
00:44:55.978 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:44:55.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   18480.49      72.19   13857.50    3347.71   33666.55
00:44:55.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   19088.89      74.57   13412.42    7418.13   30494.85
00:44:55.979 ========================================================
00:44:55.979 Total                                                                    :   37569.38     146.76   13631.35    3347.71   33666.55
00:44:55.979
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2085845 0
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2085845 0 idle
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085845 root 20 0 128.2g 43776 31104 S 6.7 0.0 0:20.19 reactor_0'
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085845 root 20 0 128.2g 43776 31104 S 6.7 0.0 0:20.19 reactor_0
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2085845 1
00:44:55.979 14:40:58
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2085845 1 idle 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085935 root 20 0 128.2g 43776 31104 S 0.0 0.0 0:09.97 reactor_1' 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085935 root 20 0 128.2g 43776 31104 S 0.0 0.0 0:09.97 reactor_1 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:55.979 14:40:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:44:55.979 14:40:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:44:55.979 14:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:44:55.979 14:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:44:55.979 14:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:44:55.979 14:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:44:57.890 14:41:01 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2085845 0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2085845 0 idle 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085845 root 20 0 128.2g 78336 31104 S 6.7 0.1 0:20.57 reactor_0' 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085845 root 20 0 128.2g 78336 31104 S 6.7 0.1 0:20.57 reactor_0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2085845 1 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2085845 1 idle 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2085845 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:44:57.890 14:41:01 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2085845 -w 256 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2085935 root 20 0 128.2g 78336 31104 S 0.0 0.1 0:10.11 reactor_1' 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2085935 root 20 0 128.2g 78336 31104 S 0.0 0.1 0:10.11 reactor_1 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:44:57.890 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:44:58.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:44:58.150 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:58.151 rmmod nvme_tcp 00:44:58.151 rmmod nvme_fabrics 00:44:58.151 rmmod nvme_keyring 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 2085845 ']' 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2085845 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2085845 ']' 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2085845 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2085845 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2085845' 00:44:58.151 killing process with pid 2085845 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2085845 00:44:58.151 14:41:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2085845 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:58.411 14:41:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:44:58.411 14:41:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:58.411 14:41:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:58.411 14:41:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:58.411 14:41:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:58.411 14:41:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:00.956 14:41:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:00.956 00:45:00.956 real 0m25.372s 00:45:00.956 user 0m40.070s 00:45:00.956 sys 0m9.782s 00:45:00.956 14:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:00.956 14:41:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:00.956 ************************************ 00:45:00.956 END TEST nvmf_interrupt 00:45:00.956 ************************************ 00:45:00.956 00:45:00.956 real 38m23.098s 00:45:00.956 user 92m4.748s 00:45:00.956 sys 11m28.246s 00:45:00.956 14:41:04 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:00.956 14:41:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:00.956 ************************************ 00:45:00.956 END TEST nvmf_tcp 00:45:00.956 ************************************ 00:45:00.956 
14:41:04 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:45:00.956 14:41:04 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:00.956 14:41:04 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:00.956 14:41:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:00.956 14:41:04 -- common/autotest_common.sh@10 -- # set +x 00:45:00.956 ************************************ 00:45:00.956 START TEST spdkcli_nvmf_tcp 00:45:00.956 ************************************ 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:45:00.956 * Looking for test storage... 00:45:00.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.956 --rc genhtml_branch_coverage=1 00:45:00.956 --rc genhtml_function_coverage=1 00:45:00.956 --rc genhtml_legend=1 00:45:00.956 --rc geninfo_all_blocks=1 00:45:00.956 --rc geninfo_unexecuted_blocks=1 00:45:00.956 00:45:00.956 ' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.956 --rc genhtml_branch_coverage=1 00:45:00.956 --rc genhtml_function_coverage=1 00:45:00.956 --rc genhtml_legend=1 00:45:00.956 --rc geninfo_all_blocks=1 00:45:00.956 --rc geninfo_unexecuted_blocks=1 00:45:00.956 00:45:00.956 ' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.956 --rc genhtml_branch_coverage=1 00:45:00.956 --rc genhtml_function_coverage=1 00:45:00.956 --rc genhtml_legend=1 00:45:00.956 --rc geninfo_all_blocks=1 00:45:00.956 --rc geninfo_unexecuted_blocks=1 00:45:00.956 00:45:00.956 ' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:00.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:00.956 --rc genhtml_branch_coverage=1 00:45:00.956 --rc genhtml_function_coverage=1 00:45:00.956 --rc genhtml_legend=1 00:45:00.956 --rc geninfo_all_blocks=1 00:45:00.956 --rc geninfo_unexecuted_blocks=1 00:45:00.956 00:45:00.956 ' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:45:00.956 
14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:00.956 14:41:04 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:45:00.957 14:41:04 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:00.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2089257 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2089257 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2089257 ']' 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:00.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:00.957 14:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:00.957 [2024-10-13 14:41:04.481884] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 
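The run_nvmf_tgt helper above amounts to launching the target with a two-core mask and blocking until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming a built SPDK tree and the default /var/tmp/spdk.sock socket seen in the log; the polling loop is a simplified stand-in for the test's waitforlisten helper, and spdk_get_version is used only as a cheap liveness probe:

    # Start the target on cores 0-1 (-m 0x3), main core 0 (-p 0), in the background.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    # rpc.py exits non-zero until the app is listening on the socket, so poll it.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $tgt_pid) is ready"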
00:45:00.957 [2024-10-13 14:41:04.481953] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089257 ] 00:45:00.957 [2024-10-13 14:41:04.616568] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:45:01.218 [2024-10-13 14:41:04.665863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:01.218 [2024-10-13 14:41:04.695896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:01.218 [2024-10-13 14:41:04.695902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:01.789 14:41:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:01.789 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:01.789 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:01.789 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:01.789 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:45:01.789 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:01.789 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:01.789 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:01.789 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:01.789 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:01.789 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:01.789 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:01.789 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:01.789 ' 00:45:04.333 [2024-10-13 14:41:08.014086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:05.716 [2024-10-13 14:41:09.371221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:45:08.257 [2024-10-13 14:41:11.892470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:45:10.800 [2024-10-13 14:41:14.121537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:45:12.183 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:45:12.183 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:45:12.183 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:45:12.183 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:45:12.183 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:45:12.183 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:45:12.183 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:45:12.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:12.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
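Each "Executing command" line above and below is spdkcli_job.py replaying one spdkcli command against the target and checking its output. The same configuration can be driven by hand through scripts/spdkcli.py, which the test itself later invokes as "spdkcli.py ll /nvmf". A small sketch, with commands adapted from the batch above (Malloc1 is reused for the namespace so the sketch stays self-contained; one command per invocation, matching the usage shown in this log):

    # Create a malloc bdev, expose it through a subsystem, add a listener, list the tree.
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf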
00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:12.183 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:45:12.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:45:12.183 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:45:12.444 14:41:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:45:12.704 14:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:45:12.704 14:41:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:45:12.704 14:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:45:12.704 14:41:16 
spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:12.704 14:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:12.965 14:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:45:12.965 14:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:12.965 14:41:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:12.965 14:41:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:45:12.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:45:12.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:12.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:45:12.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:45:12.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:45:12.965 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:45:12.965 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:45:12.965 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:45:12.965 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:45:12.965 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:45:12.965 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:45:12.965 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:45:12.965 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:45:12.965 ' 00:45:18.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:45:18.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:45:18.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:18.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:45:18.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:45:18.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:45:18.255 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:45:18.255 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:45:18.255 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:45:18.255 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:45:18.255 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:45:18.255 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:45:18.255 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:45:18.255 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:45:18.516 14:41:22 
spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2089257 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2089257 ']' 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2089257 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2089257 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2089257' 00:45:18.516 killing process with pid 2089257 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2089257 00:45:18.516 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2089257 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2089257 ']' 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2089257 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2089257 ']' 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2089257 00:45:18.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2089257) - No such process 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2089257 is not found' 00:45:18.777 Process with pid 2089257 is not found 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:45:18.777 00:45:18.777 real 0m18.082s 00:45:18.777 user 0m39.992s 00:45:18.777 sys 0m0.905s 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:18.777 14:41:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:18.777 ************************************ 00:45:18.777 END TEST spdkcli_nvmf_tcp 00:45:18.777 ************************************ 00:45:18.777 14:41:22 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:18.777 14:41:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:45:18.777 14:41:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:18.777 14:41:22 -- common/autotest_common.sh@10 -- # set +x 00:45:18.777 ************************************ 00:45:18.777 START TEST nvmf_identify_passthru 00:45:18.777 ************************************ 00:45:18.777 14:41:22 nvmf_identify_passthru -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:45:18.777 * Looking for test storage... 00:45:18.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:18.777 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:18.778 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:45:18.778 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:19.039 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:19.039 14:41:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:45:19.039 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:19.039 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:19.039 --rc genhtml_branch_coverage=1 00:45:19.039 --rc genhtml_function_coverage=1 00:45:19.039 --rc genhtml_legend=1 00:45:19.039 --rc geninfo_all_blocks=1 00:45:19.039 --rc geninfo_unexecuted_blocks=1 00:45:19.039 00:45:19.039 ' 00:45:19.039 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:19.039 --rc genhtml_branch_coverage=1 00:45:19.039 --rc genhtml_function_coverage=1 00:45:19.039 --rc genhtml_legend=1 00:45:19.039 --rc geninfo_all_blocks=1 00:45:19.039 --rc geninfo_unexecuted_blocks=1 00:45:19.039 00:45:19.039 ' 00:45:19.039 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:19.039 --rc genhtml_branch_coverage=1 00:45:19.040 --rc genhtml_function_coverage=1 00:45:19.040 --rc genhtml_legend=1 00:45:19.040 --rc geninfo_all_blocks=1 00:45:19.040 --rc geninfo_unexecuted_blocks=1 00:45:19.040 00:45:19.040 ' 00:45:19.040 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:19.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:19.040 --rc genhtml_branch_coverage=1 00:45:19.040 --rc genhtml_function_coverage=1 00:45:19.040 --rc genhtml_legend=1 00:45:19.040 --rc geninfo_all_blocks=1 00:45:19.040 --rc geninfo_unexecuted_blocks=1 00:45:19.040 00:45:19.040 ' 00:45:19.040 14:41:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
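The lcov probe above boils down to `lt 1.15 2`: cmp_versions splits both strings on '.', '-' and ':' and compares field by field as integers, with missing fields read as zero, so 1.15 sorts below 2 and the pre-2.x option set is selected. A condensed sketch of that comparison:

    # field-wise numeric version compare; returns success when $1 < $2
    lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first lower field decides
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "lcov older than 2.x"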
NVMF_THIRD_PORT=4422 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
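nvme gen-hostnqn above mints the host identity that every later connect in this run presents to the target: a UUID-based NQN in the nqn.2014-08.org.nvmexpress namespace. How the two exported values relate (a sketch; common.sh may derive the ID differently):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID
    # the pair is later consumed as: nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID ...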
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:19.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:19.040 14:41:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:19.040 14:41:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@4 -- # 
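The '[: : integer expression expected' diagnostic above is noisy but harmless: an empty variable reaches test(1)'s numeric -eq operator ('[' '' -eq 1 ']'), which prints the complaint and evaluates false, so the guarded branch is simply skipped. A defaulted expansion silences it (a sketch with a hypothetical variable name):

    flag=""                                     # stands in for the harness's unset flag
    [ "${flag:-0}" -eq 1 ] && echo "flag set"   # ${var:-0} keeps the operand numeric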
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:45:19.040 14:41:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:19.040 14:41:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:19.040 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:19.040 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:19.040 14:41:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:45:19.040 14:41:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:45:27.181 14:41:29 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:27.181 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:27.181 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:27.181 Found net devices under 0000:31:00.0: cvl_0_0 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:27.181 Found net devices under 0000:31:00.1: cvl_0_1 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:27.181 14:41:29 nvmf_identify_passthru -- 
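NIC selection above is a table lookup: vendor:device IDs bucket the PCI functions into the e810/x722/mlx families, TCP keeps the two E810 ports, and each PCI address is resolved to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. The resolution step, condensed (a sketch):

    # a NIC's netdev name(s) live under its PCI device node in sysfs
    for pci in 0000:31:00.0 0000:31:00.1; do   # the two E810 ports found above
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done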
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:27.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:27.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:45:27.181 00:45:27.181 --- 10.0.0.2 ping statistics --- 00:45:27.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:27.181 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:27.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:27.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:45:27.181 00:45:27.181 --- 10.0.0.1 ping statistics --- 00:45:27.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:27.181 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:45:27.181 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:27.182 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:27.182 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:27.182 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:27.182 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:27.182 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:27.182 14:41:29 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:27.182 14:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:27.182 14:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:45:27.182 14:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:45:27.182 14:41:30 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:45:27.182 14:41:30 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:45:27.182 14:41:30 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 
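nvmf_tcp_init above turns the two physical ports into a two-host topology on one box: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), a comment-tagged iptables rule opens port 4420, and the two pings prove reachability in both directions. Condensed from the commands above (a sketch):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the SPDK_NVMF comment lets teardown strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port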
-- # nvme_serial_number=S64GNE0R605494 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:45:27.182 14:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2096662 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:45:27.754 14:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2096662 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2096662 ']' 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:27.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:27.754 14:41:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:27.754 [2024-10-13 14:41:31.350932] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:45:27.754 [2024-10-13 14:41:31.350998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:28.017 [2024-10-13 14:41:31.491683] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:45:28.017 [2024-10-13 14:41:31.540680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:28.017 [2024-10-13 14:41:31.569554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:28.017 [2024-10-13 14:41:31.569598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
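get_first_nvme_bdf above reduces gen_nvme.sh's JSON to a list of NVMe PCIe addresses and keeps the first (0000:65:00.0 here); the target is then launched inside the namespace with --wait-for-rpc so it parks before subsystem init until told to proceed. A rough sketch of that sequence, with a polling loop standing in for the harness's waitforlisten helper:

    bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do                 # default RPC unix socket
        kill -0 "$nvmfpid" 2>/dev/null || exit 1        # bail if the target died during startup
        sleep 0.1
    done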
00:45:28.017 [2024-10-13 14:41:31.569606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:28.017 [2024-10-13 14:41:31.569613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:28.017 [2024-10-13 14:41:31.569619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:28.017 [2024-10-13 14:41:31.571980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:28.017 [2024-10-13 14:41:31.572139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:28.017 [2024-10-13 14:41:31.572196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:28.017 [2024-10-13 14:41:31.572196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:45:28.590 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:28.590 INFO: Log level set to 20 00:45:28.590 INFO: Requests: 00:45:28.590 { 00:45:28.590 "jsonrpc": "2.0", 00:45:28.590 "method": "nvmf_set_config", 00:45:28.590 "id": 1, 00:45:28.590 "params": { 00:45:28.590 "admin_cmd_passthru": { 00:45:28.590 "identify_ctrlr": true 00:45:28.590 } 00:45:28.590 } 00:45:28.590 } 00:45:28.590 00:45:28.590 INFO: response: 00:45:28.590 { 00:45:28.590 "jsonrpc": "2.0", 00:45:28.590 "id": 1, 00:45:28.590 "result": true 00:45:28.590 } 00:45:28.590 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:28.590 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:28.590 INFO: Setting log level to 20 00:45:28.590 INFO: Setting log level to 20 00:45:28.590 INFO: Log level set to 20 00:45:28.590 INFO: Log level set to 20 00:45:28.590 INFO: Requests: 00:45:28.590 { 00:45:28.590 "jsonrpc": "2.0", 00:45:28.590 "method": "framework_start_init", 00:45:28.590 "id": 1 00:45:28.590 } 00:45:28.590 00:45:28.590 INFO: Requests: 00:45:28.590 { 00:45:28.590 "jsonrpc": "2.0", 00:45:28.590 "method": "framework_start_init", 00:45:28.590 "id": 1 00:45:28.590 } 00:45:28.590 00:45:28.590 [2024-10-13 14:41:32.217179] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:45:28.590 INFO: response: 00:45:28.590 { 00:45:28.590 "jsonrpc": "2.0", 00:45:28.590 "id": 1, 00:45:28.590 "result": true 00:45:28.590 } 00:45:28.590 00:45:28.590 INFO: response: 00:45:28.590 { 00:45:28.590 "jsonrpc": "2.0", 00:45:28.590 "id": 1, 00:45:28.590 "result": true 00:45:28.590 } 00:45:28.590 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:28.590 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:28.590 14:41:32 nvmf_identify_passthru -- 
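Because the target sits in --wait-for-rpc, configuration is two-phase: nvmf_set_config flips admin_cmd_passthru.identify_ctrlr on (legal only before init, hence the custom-handler notice), framework_start_init then brings the framework up, and nvmf_create_transport registers TCP. The same three calls issued through rpc.py (a sketch, assuming the default socket):

    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # pre-init only
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # same flags the test passes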
common/autotest_common.sh@10 -- # set +x 00:45:28.590 INFO: Setting log level to 40 00:45:28.590 INFO: Setting log level to 40 00:45:28.590 INFO: Setting log level to 40 00:45:28.590 [2024-10-13 14:41:32.230471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:28.590 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:28.590 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:28.590 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:29.160 Nvme0n1 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.160 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.160 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.160 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:29.160 [2024-10-13 14:41:32.613703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.160 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:29.160 [ 00:45:29.160 { 00:45:29.160 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:45:29.160 "subtype": "Discovery", 00:45:29.160 "listen_addresses": [], 00:45:29.160 "allow_any_host": true, 00:45:29.160 "hosts": [] 00:45:29.160 }, 00:45:29.160 { 00:45:29.160 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:29.160 "subtype": "NVMe", 00:45:29.160 "listen_addresses": [ 00:45:29.160 { 00:45:29.160 "trtype": "TCP", 00:45:29.160 "adrfam": "IPv4", 00:45:29.160 "traddr": "10.0.0.2", 00:45:29.160 "trsvcid": "4420" 00:45:29.160 } 00:45:29.160 ], 00:45:29.160 "allow_any_host": true, 00:45:29.160 "hosts": [], 00:45:29.160 "serial_number": 
"SPDK00000000000001", 00:45:29.160 "model_number": "SPDK bdev Controller", 00:45:29.160 "max_namespaces": 1, 00:45:29.160 "min_cntlid": 1, 00:45:29.160 "max_cntlid": 65519, 00:45:29.160 "namespaces": [ 00:45:29.160 { 00:45:29.160 "nsid": 1, 00:45:29.160 "bdev_name": "Nvme0n1", 00:45:29.160 "name": "Nvme0n1", 00:45:29.160 "nguid": "3634473052605494002538450000002B", 00:45:29.160 "uuid": "36344730-5260-5494-0025-38450000002b" 00:45:29.160 } 00:45:29.160 ] 00:45:29.160 } 00:45:29.160 ] 00:45:29.160 14:41:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.161 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:29.161 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:45:29.161 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:45:29.420 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:45:29.420 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:29.420 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:45:29.420 14:41:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:45:29.680 14:41:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:45:29.680 14:41:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:45:29.680 14:41:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:45:29.680 14:41:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:29.680 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.680 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:29.680 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.680 14:41:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:45:29.680 14:41:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:29.680 rmmod nvme_tcp 00:45:29.680 rmmod nvme_fabrics 00:45:29.680 rmmod nvme_keyring 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
2096662 ']' 00:45:29.680 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2096662 00:45:29.681 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2096662 ']' 00:45:29.681 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2096662 00:45:29.681 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:45:29.681 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:29.681 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2096662 00:45:29.942 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:29.942 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:29.942 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2096662' 00:45:29.942 killing process with pid 2096662 00:45:29.942 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2096662 00:45:29.942 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2096662 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:30.203 14:41:33 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:30.203 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:30.203 14:41:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:32.118 14:41:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:32.118 00:45:32.118 real 0m13.389s 00:45:32.118 user 0m10.557s 00:45:32.118 sys 0m6.596s 00:45:32.118 14:41:35 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:32.118 14:41:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:45:32.118 ************************************ 00:45:32.118 END TEST nvmf_identify_passthru 00:45:32.118 ************************************ 00:45:32.118 14:41:35 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:32.118 14:41:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:32.118 14:41:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:32.118 14:41:35 -- common/autotest_common.sh@10 -- # set +x 00:45:32.118 ************************************ 00:45:32.118 START TEST nvmf_dif 00:45:32.118 ************************************ 00:45:32.118 14:41:35 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:45:32.379 * Looking for test storage... 
00:45:32.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:32.379 14:41:35 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:32.379 14:41:35 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:45:32.379 14:41:35 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:32.379 14:41:35 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:45:32.379 14:41:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:32.380 14:41:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:32.380 14:41:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:32.380 --rc genhtml_branch_coverage=1 00:45:32.380 --rc genhtml_function_coverage=1 00:45:32.380 --rc genhtml_legend=1 00:45:32.380 --rc geninfo_all_blocks=1 00:45:32.380 --rc geninfo_unexecuted_blocks=1 00:45:32.380 00:45:32.380 ' 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:32.380 --rc genhtml_branch_coverage=1 00:45:32.380 --rc genhtml_function_coverage=1 00:45:32.380 --rc genhtml_legend=1 00:45:32.380 --rc geninfo_all_blocks=1 00:45:32.380 --rc geninfo_unexecuted_blocks=1 00:45:32.380 00:45:32.380 ' 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:45:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:32.380 --rc genhtml_branch_coverage=1 00:45:32.380 --rc genhtml_function_coverage=1 00:45:32.380 --rc genhtml_legend=1 00:45:32.380 --rc geninfo_all_blocks=1 00:45:32.380 --rc geninfo_unexecuted_blocks=1 00:45:32.380 00:45:32.380 ' 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:32.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:32.380 --rc genhtml_branch_coverage=1 00:45:32.380 --rc genhtml_function_coverage=1 00:45:32.380 --rc genhtml_legend=1 00:45:32.380 --rc geninfo_all_blocks=1 00:45:32.380 --rc geninfo_unexecuted_blocks=1 00:45:32.380 00:45:32.380 ' 00:45:32.380 14:41:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:32.380 14:41:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:32.380 14:41:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:32.380 14:41:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:32.380 14:41:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:32.380 14:41:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:45:32.380 14:41:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:32.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:32.380 14:41:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:45:32.380 14:41:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:45:32.380 14:41:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:45:32.380 14:41:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:45:32.380 14:41:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:32.380 14:41:36 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:45:32.380 14:41:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:40.524 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:40.524 
14:41:43 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:40.524 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:40.524 Found net devices under 0000:31:00.0: cvl_0_0 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:40.524 Found net devices under 0000:31:00.1: cvl_0_1 00:45:40.524 14:41:43 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:40.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:40.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:45:40.525 00:45:40.525 --- 10.0.0.2 ping statistics --- 00:45:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:40.525 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:40.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:40.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:45:40.525 00:45:40.525 --- 10.0.0.1 ping statistics --- 00:45:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:40.525 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:45:40.525 14:41:43 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:43.827 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:45:43.827 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:45:43.827 14:41:47 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:43.827 14:41:47 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:43.827 14:41:47 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:43.827 14:41:47 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:43.827 14:41:47 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:43.827 14:41:47 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:43.828 14:41:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:45:43.828 14:41:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:45:43.828 14:41:47 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:43.828 14:41:47 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2102944 00:45:43.828 14:41:47 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2102944 00:45:43.828 14:41:47 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2102944 ']' 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:45:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:43.828 14:41:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:43.828 [2024-10-13 14:41:47.368674] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:45:43.828 [2024-10-13 14:41:47.368720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:43.828 [2024-10-13 14:41:47.505259] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:45:44.089 [2024-10-13 14:41:47.552326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:44.089 [2024-10-13 14:41:47.569487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:44.089 [2024-10-13 14:41:47.569518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:44.089 [2024-10-13 14:41:47.569526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:44.089 [2024-10-13 14:41:47.569533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:44.089 [2024-10-13 14:41:47.569538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:44.089 [2024-10-13 14:41:47.570119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:45:44.669 14:41:48 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 14:41:48 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:44.669 14:41:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:45:44.669 14:41:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 [2024-10-13 14:41:48.234124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:44.669 14:41:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 ************************************ 00:45:44.669 START TEST fio_dif_1_default 00:45:44.669 ************************************ 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:45:44.669 14:41:48 
nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 bdev_null0 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:44.669 [2024-10-13 14:41:48.326361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:44.669 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:44.669 { 00:45:44.669 "params": { 00:45:44.669 "name": "Nvme$subsystem", 00:45:44.669 "trtype": "$TEST_TRANSPORT", 00:45:44.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:44.669 "adrfam": "ipv4", 00:45:44.669 
"trsvcid": "$NVMF_PORT", 00:45:44.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:44.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:44.669 "hdgst": ${hdgst:-false}, 00:45:44.669 "ddgst": ${ddgst:-false} 00:45:44.669 }, 00:45:44.669 "method": "bdev_nvme_attach_controller" 00:45:44.670 } 00:45:44.670 EOF 00:45:44.670 )") 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:45:44.670 14:41:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:44.670 "params": { 00:45:44.670 "name": "Nvme0", 00:45:44.670 "trtype": "tcp", 00:45:44.670 "traddr": "10.0.0.2", 00:45:44.670 "adrfam": "ipv4", 00:45:44.670 "trsvcid": "4420", 00:45:44.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:44.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:44.670 "hdgst": false, 00:45:44.670 "ddgst": false 00:45:44.670 }, 00:45:44.670 "method": "bdev_nvme_attach_controller" 00:45:44.670 }' 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:44.931 14:41:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.192 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:45.192 fio-3.35 00:45:45.192 Starting 1 thread 00:45:57.421 00:45:57.421 filename0: (groupid=0, jobs=1): err= 0: pid=2103470: Sun Oct 13 14:41:59 2024 00:45:57.421 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10021msec) 00:45:57.421 slat (nsec): min=5645, max=56199, avg=6442.36, stdev=2230.84 00:45:57.421 clat (usec): min=40846, max=43639, avg=41050.73, stdev=284.07 00:45:57.421 lat (usec): min=40855, max=43675, avg=41057.17, stdev=284.87 00:45:57.421 clat percentiles (usec): 00:45:57.421 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:45:57.421 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:57.421 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:45:57.421 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:45:57.421 | 99.99th=[43779] 00:45:57.421 bw ( KiB/s): min= 384, max= 416, per=99.59%, avg=388.80, stdev=11.72, samples=20 00:45:57.421 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:45:57.421 lat (msec) : 50=100.00% 00:45:57.421 cpu : usr=93.61%, sys=6.18%, ctx=14, majf=0, minf=274 00:45:57.421 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.421 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.421 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:57.421 00:45:57.421 Run status group 0 (all jobs): 
00:45:57.421 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10021-10021msec 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 00:45:57.421 real 0m11.293s 00:45:57.421 user 0m18.293s 00:45:57.421 sys 0m1.059s 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 ************************************ 00:45:57.421 END TEST fio_dif_1_default 00:45:57.421 ************************************ 00:45:57.421 14:41:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:57.421 14:41:59 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:57.421 14:41:59 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 ************************************ 00:45:57.421 START TEST fio_dif_1_multi_subsystems 00:45:57.421 ************************************ 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 bdev_null0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 [2024-10-13 14:41:59.696375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 bdev_null1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:57.421 { 00:45:57.421 "params": { 00:45:57.421 "name": "Nvme$subsystem", 00:45:57.421 "trtype": "$TEST_TRANSPORT", 00:45:57.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:57.421 "adrfam": "ipv4", 00:45:57.421 "trsvcid": "$NVMF_PORT", 00:45:57.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:57.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:57.421 "hdgst": ${hdgst:-false}, 00:45:57.421 "ddgst": ${ddgst:-false} 00:45:57.421 }, 00:45:57.421 "method": "bdev_nvme_attach_controller" 00:45:57.421 } 00:45:57.421 EOF 00:45:57.421 )") 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.421 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:57.422 { 00:45:57.422 "params": { 00:45:57.422 "name": "Nvme$subsystem", 00:45:57.422 "trtype": "$TEST_TRANSPORT", 00:45:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:57.422 "adrfam": "ipv4", 00:45:57.422 "trsvcid": "$NVMF_PORT", 00:45:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:57.422 "hdgst": ${hdgst:-false}, 00:45:57.422 "ddgst": ${ddgst:-false} 00:45:57.422 }, 00:45:57.422 "method": "bdev_nvme_attach_controller" 00:45:57.422 } 00:45:57.422 EOF 00:45:57.422 )") 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:57.422 "params": { 00:45:57.422 "name": "Nvme0", 00:45:57.422 "trtype": "tcp", 00:45:57.422 "traddr": "10.0.0.2", 00:45:57.422 "adrfam": "ipv4", 00:45:57.422 "trsvcid": "4420", 00:45:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:57.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:57.422 "hdgst": false, 00:45:57.422 "ddgst": false 00:45:57.422 }, 00:45:57.422 "method": "bdev_nvme_attach_controller" 00:45:57.422 },{ 00:45:57.422 "params": { 00:45:57.422 "name": "Nvme1", 00:45:57.422 "trtype": "tcp", 00:45:57.422 "traddr": "10.0.0.2", 00:45:57.422 "adrfam": "ipv4", 00:45:57.422 "trsvcid": "4420", 00:45:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:57.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:57.422 "hdgst": false, 00:45:57.422 "ddgst": false 00:45:57.422 }, 00:45:57.422 "method": "bdev_nvme_attach_controller" 00:45:57.422 }' 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:57.422 14:41:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:57.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:57.422 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:57.422 fio-3.35 00:45:57.422 Starting 2 threads 00:46:07.416 00:46:07.416 filename0: (groupid=0, jobs=1): err= 0: pid=2105712: Sun Oct 13 14:42:10 2024 00:46:07.416 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10028msec) 00:46:07.416 slat (nsec): min=5682, max=25841, avg=6619.12, stdev=1576.13 00:46:07.416 clat (usec): min=40923, max=42436, avg=41937.86, stdev=215.83 00:46:07.416 lat (usec): min=40932, max=42462, avg=41944.47, stdev=215.48 00:46:07.416 clat percentiles (usec): 00:46:07.416 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:46:07.416 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:46:07.416 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:07.416 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:07.416 | 99.99th=[42206] 00:46:07.416 bw ( KiB/s): min= 352, max= 384, per=49.31%, avg=380.80, stdev= 9.85, samples=20 00:46:07.416 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:46:07.416 lat (msec) : 50=100.00% 00:46:07.416 cpu : usr=95.47%, sys=4.28%, ctx=28, majf=0, minf=187 00:46:07.416 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:07.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.416 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:07.416 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:07.416 filename1: (groupid=0, jobs=1): err= 0: pid=2105713: Sun Oct 13 14:42:10 2024 00:46:07.417 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10021msec) 00:46:07.417 slat (nsec): min=5638, max=26842, avg=6851.92, stdev=1505.63 00:46:07.417 clat (usec): min=40813, max=42550, avg=41049.25, stdev=259.07 00:46:07.417 lat (usec): min=40821, max=42577, avg=41056.11, stdev=259.18 00:46:07.417 clat percentiles (usec): 00:46:07.417 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:46:07.417 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:46:07.417 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:46:07.417 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:46:07.417 | 99.99th=[42730] 00:46:07.417 bw ( KiB/s): min= 384, max= 416, per=50.35%, avg=388.80, stdev=11.72, samples=20 00:46:07.417 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:46:07.417 lat (msec) : 50=100.00% 00:46:07.417 cpu : usr=95.70%, sys=4.09%, ctx=5, majf=0, minf=105 00:46:07.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:07.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:07.417 issued rwts: total=976,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:46:07.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:07.417 00:46:07.417 Run status group 0 (all jobs): 00:46:07.417 READ: bw=771KiB/s (789kB/s), 381KiB/s-390KiB/s (390kB/s-399kB/s), io=7728KiB (7913kB), run=10021-10028msec 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.417 00:46:07.417 real 0m11.396s 00:46:07.417 user 0m33.405s 00:46:07.417 sys 0m1.218s 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:07.417 14:42:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:07.417 ************************************ 00:46:07.417 END TEST fio_dif_1_multi_subsystems 00:46:07.417 ************************************ 00:46:07.417 14:42:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:46:07.417 14:42:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:07.417 14:42:11 nvmf_dif -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:46:07.417 14:42:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:07.678 ************************************ 00:46:07.678 START TEST fio_dif_rand_params 00:46:07.678 ************************************ 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.678 bdev_null0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:07.678 [2024-10-13 14:42:11.177959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- 
# fio /dev/fd/62 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:07.678 { 00:46:07.678 "params": { 00:46:07.678 "name": "Nvme$subsystem", 00:46:07.678 "trtype": "$TEST_TRANSPORT", 00:46:07.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:07.678 "adrfam": "ipv4", 00:46:07.678 "trsvcid": "$NVMF_PORT", 00:46:07.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:07.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:07.678 "hdgst": ${hdgst:-false}, 00:46:07.678 "ddgst": ${ddgst:-false} 00:46:07.678 }, 00:46:07.678 "method": "bdev_nvme_attach_controller" 00:46:07.678 } 00:46:07.678 EOF 00:46:07.678 )") 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
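The job file gen_fio_conf hands fio on /dev/fd/61 is never echoed into the log, but the banner fio prints next (rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3, three threads) pins down its essentials. A hand-written equivalent is sketched below; the section name and filename=Nvme0n1 are assumptions inferred from the Nvme0 controller attached via the JSON config, not a verbatim copy of the generated file:

# job.fio -- plausible reconstruction for NULL_DIF=3 bs=128k numjobs=3 iodepth=3 runtime=5
[global]
thread=1              ; the SPDK fio plugin requires fio's thread mode
ioengine=spdk_bdev
time_based=1
runtime=5
[filename0]
filename=Nvme0n1      ; bdev exposed by bdev_nvme_attach_controller (controller name Nvme0) -- assumed
rw=randread
bs=128k
iodepth=3
numjobs=3

# Invocation mirroring the command recorded in the log, with the generated
# JSON saved to a file instead of being passed via /dev/fd/62:
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json job.fio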
00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:07.678 "params": { 00:46:07.678 "name": "Nvme0", 00:46:07.678 "trtype": "tcp", 00:46:07.678 "traddr": "10.0.0.2", 00:46:07.678 "adrfam": "ipv4", 00:46:07.678 "trsvcid": "4420", 00:46:07.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:07.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:07.678 "hdgst": false, 00:46:07.678 "ddgst": false 00:46:07.678 }, 00:46:07.678 "method": "bdev_nvme_attach_controller" 00:46:07.678 }' 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:07.678 14:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:08.247 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:08.247 ... 
00:46:08.247 fio-3.35 00:46:08.247 Starting 3 threads 00:46:14.825 00:46:14.825 filename0: (groupid=0, jobs=1): err= 0: pid=2108455: Sun Oct 13 14:42:17 2024 00:46:14.825 read: IOPS=371, BW=46.5MiB/s (48.7MB/s)(235MiB/5046msec) 00:46:14.825 slat (nsec): min=5696, max=31700, avg=9276.91, stdev=2308.88 00:46:14.825 clat (usec): min=3986, max=88285, avg=8034.47, stdev=6469.02 00:46:14.825 lat (usec): min=3996, max=88295, avg=8043.75, stdev=6469.03 00:46:14.825 clat percentiles (usec): 00:46:14.825 | 1.00th=[ 4686], 5.00th=[ 5276], 10.00th=[ 5669], 20.00th=[ 6063], 00:46:14.825 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7111], 60.00th=[ 7439], 00:46:14.825 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8979], 00:46:14.825 | 99.00th=[47449], 99.50th=[47973], 99.90th=[49021], 99.95th=[88605], 00:46:14.825 | 99.99th=[88605] 00:46:14.825 bw ( KiB/s): min=34048, max=59136, per=39.34%, avg=47974.40, stdev=8004.25, samples=10 00:46:14.825 iops : min= 266, max= 462, avg=374.80, stdev=62.53, samples=10 00:46:14.825 lat (msec) : 4=0.05%, 10=97.28%, 20=0.21%, 50=2.40%, 100=0.05% 00:46:14.825 cpu : usr=91.48%, sys=6.88%, ctx=411, majf=0, minf=139 00:46:14.825 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.825 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.825 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:14.825 filename0: (groupid=0, jobs=1): err= 0: pid=2108457: Sun Oct 13 14:42:17 2024 00:46:14.825 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(174MiB/5045msec) 00:46:14.825 slat (nsec): min=5666, max=31393, avg=8463.38, stdev=1693.75 00:46:14.825 clat (usec): min=4509, max=91399, avg=10853.72, stdev=8931.86 00:46:14.825 lat (usec): min=4518, max=91408, avg=10862.19, stdev=8931.96 00:46:14.825 clat percentiles (usec): 00:46:14.825 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7635], 00:46:14.825 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:46:14.825 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[12518], 00:46:14.825 | 99.00th=[49546], 99.50th=[50594], 99.90th=[90702], 99.95th=[91751], 00:46:14.825 | 99.99th=[91751] 00:46:14.825 bw ( KiB/s): min=18688, max=43776, per=29.12%, avg=35507.20, stdev=7599.03, samples=10 00:46:14.825 iops : min= 146, max= 342, avg=277.40, stdev=59.37, samples=10 00:46:14.825 lat (msec) : 10=65.66%, 20=29.88%, 50=3.82%, 100=0.65% 00:46:14.825 cpu : usr=94.35%, sys=5.39%, ctx=6, majf=0, minf=64 00:46:14.825 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.825 issued rwts: total=1389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.825 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:14.825 filename0: (groupid=0, jobs=1): err= 0: pid=2108458: Sun Oct 13 14:42:17 2024 00:46:14.825 read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(193MiB/5046msec) 00:46:14.825 slat (nsec): min=5658, max=32353, avg=8817.79, stdev=1880.62 00:46:14.825 clat (usec): min=4794, max=51112, avg=9777.74, stdev=6458.07 00:46:14.825 lat (usec): min=4803, max=51118, avg=9786.56, stdev=6457.93 00:46:14.825 clat percentiles (usec): 00:46:14.825 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6849], 
20.00th=[ 7635], 00:46:14.825 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:46:14.825 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11207], 00:46:14.825 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:46:14.825 | 99.99th=[51119] 00:46:14.825 bw ( KiB/s): min=31488, max=46336, per=32.33%, avg=39424.00, stdev=4429.12, samples=10 00:46:14.825 iops : min= 246, max= 362, avg=308.00, stdev=34.60, samples=10 00:46:14.825 lat (msec) : 10=80.35%, 20=16.99%, 50=2.46%, 100=0.19% 00:46:14.825 cpu : usr=92.35%, sys=6.42%, ctx=235, majf=0, minf=90 00:46:14.825 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:14.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:14.825 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:14.825 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:14.825 00:46:14.825 Run status group 0 (all jobs): 00:46:14.825 READ: bw=119MiB/s (125MB/s), 34.4MiB/s-46.5MiB/s (36.1MB/s-48.7MB/s), io=601MiB (630MB), run=5045-5046msec 00:46:14.825 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 bdev_null0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 [2024-10-13 14:42:17.527866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 bdev_null1 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 bdev_null2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:14.826 14:42:17 
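The trace above stands up three DIF-protected namespaces: each bdev_null_create call makes a 64 MB null bdev with 512-byte blocks plus 16 bytes of per-block metadata and DIF type 2, and each bdev is exported through its own NVMe/TCP subsystem listening on 10.0.0.2:4420. The test drives this through its rpc_cmd wrapper; a standalone sketch of the same sequence with SPDK's scripts/rpc.py (assuming a running nvmf_tgt and an already-created TCP transport, neither of which appears in this excerpt) would look roughly like:

# Sketch only: assumes nvmf_tgt is already running and the TCP transport
# exists, e.g. scripts/rpc.py nvmf_create_transport -t tcp (not in this trace).
for i in 0 1 2; do
    scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

Every command and flag here appears verbatim in the rpc_cmd calls traced above; only the transport creation is an assumption.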
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:14.826 { 00:46:14.826 "params": { 00:46:14.826 "name": "Nvme$subsystem", 00:46:14.826 "trtype": "$TEST_TRANSPORT", 00:46:14.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:14.826 "adrfam": "ipv4", 00:46:14.826 "trsvcid": "$NVMF_PORT", 00:46:14.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:14.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:14.826 "hdgst": ${hdgst:-false}, 00:46:14.826 "ddgst": ${ddgst:-false} 00:46:14.826 }, 00:46:14.826 "method": "bdev_nvme_attach_controller" 00:46:14.826 } 00:46:14.826 EOF 00:46:14.826 )") 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:14.826 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:14.827 { 00:46:14.827 "params": { 00:46:14.827 "name": "Nvme$subsystem", 00:46:14.827 "trtype": "$TEST_TRANSPORT", 00:46:14.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:14.827 "adrfam": "ipv4", 00:46:14.827 "trsvcid": "$NVMF_PORT", 00:46:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:14.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:14.827 "hdgst": ${hdgst:-false}, 00:46:14.827 "ddgst": ${ddgst:-false} 00:46:14.827 }, 00:46:14.827 "method": "bdev_nvme_attach_controller" 00:46:14.827 } 00:46:14.827 EOF 00:46:14.827 )") 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:14.827 14:42:17 
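gen_nvmf_target_json, expanded above, builds one bdev_nvme_attach_controller parameter block per subsystem: each pass through the loop expands a here-document template (filling in $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT) and appends the result to a bash array, and the fragments are joined with a comma IFS afterwards. A minimal sketch of that templating pattern on its own, detached from the harness (the real template carries the full params block; this one keeps only the name field):

config=()
for subsystem in 0 1 2; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# Join the fragments with commas, as nvmf/common.sh does with IFS=, below.
IFS=,
printf '%s\n' "${config[*]}"

The unquoted EOF delimiter is what makes the shell expand the variables inside the template before the fragment lands in the array.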
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:14.827 { 00:46:14.827 "params": { 00:46:14.827 "name": "Nvme$subsystem", 00:46:14.827 "trtype": "$TEST_TRANSPORT", 00:46:14.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:14.827 "adrfam": "ipv4", 00:46:14.827 "trsvcid": "$NVMF_PORT", 00:46:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:14.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:14.827 "hdgst": ${hdgst:-false}, 00:46:14.827 "ddgst": ${ddgst:-false} 00:46:14.827 }, 00:46:14.827 "method": "bdev_nvme_attach_controller" 00:46:14.827 } 00:46:14.827 EOF 00:46:14.827 )") 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:14.827 "params": { 00:46:14.827 "name": "Nvme0", 00:46:14.827 "trtype": "tcp", 00:46:14.827 "traddr": "10.0.0.2", 00:46:14.827 "adrfam": "ipv4", 00:46:14.827 "trsvcid": "4420", 00:46:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:14.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:14.827 "hdgst": false, 00:46:14.827 "ddgst": false 00:46:14.827 }, 00:46:14.827 "method": "bdev_nvme_attach_controller" 00:46:14.827 },{ 00:46:14.827 "params": { 00:46:14.827 "name": "Nvme1", 00:46:14.827 "trtype": "tcp", 00:46:14.827 "traddr": "10.0.0.2", 00:46:14.827 "adrfam": "ipv4", 00:46:14.827 "trsvcid": "4420", 00:46:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:14.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:14.827 "hdgst": false, 00:46:14.827 "ddgst": false 00:46:14.827 }, 00:46:14.827 "method": "bdev_nvme_attach_controller" 00:46:14.827 },{ 00:46:14.827 "params": { 00:46:14.827 "name": "Nvme2", 00:46:14.827 "trtype": "tcp", 00:46:14.827 "traddr": "10.0.0.2", 00:46:14.827 "adrfam": "ipv4", 00:46:14.827 "trsvcid": "4420", 00:46:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:14.827 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:14.827 "hdgst": false, 00:46:14.827 "ddgst": false 00:46:14.827 }, 00:46:14.827 "method": "bdev_nvme_attach_controller" 00:46:14.827 }' 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:14.827 
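The printf block above shows the fully resolved attach-controller entries that reach the fio spdk_bdev plugin on /dev/fd/62. Those fragments are only the per-controller pieces; --spdk_json_conf expects a complete SPDK JSON config, which nests them under a bdev subsystem, approximately like this (the wrapping step itself is not visible in this trace, so take the outer shape as an assumption):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}

The Nvme1 and Nvme2 entries follow the same shape. Each attached controller surfaces its namespace as a bdev named after the controller (Nvme0n1 and so on), which is what the generated fio job file refers to. The surrounding ldd | grep libasan | awk '{print $3}' probes exist to build an LD_PRELOAD so that any sanitizer runtime linked into the fio plugin is loaded ahead of fio itself.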
14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:14.827 14:42:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:14.827 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:14.827 ... 00:46:14.827 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:14.827 ... 00:46:14.827 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:14.827 ... 00:46:14.827 fio-3.35 00:46:14.827 Starting 24 threads 00:46:27.053 00:46:27.053 filename0: (groupid=0, jobs=1): err= 0: pid=2109941: Sun Oct 13 14:42:29 2024 00:46:27.053 read: IOPS=623, BW=2493KiB/s (2553kB/s)(24.4MiB/10003msec) 00:46:27.053 slat (nsec): min=5811, max=74158, avg=9769.17, stdev=6622.49 00:46:27.053 clat (usec): min=702, max=341579, avg=25609.18, stdev=24022.64 00:46:27.053 lat (usec): min=714, max=341588, avg=25618.95, stdev=24022.50 00:46:27.053 clat percentiles (usec): 00:46:27.053 | 1.00th=[ 1434], 5.00th=[ 14615], 10.00th=[ 17171], 20.00th=[ 20841], 00:46:27.053 | 30.00th=[ 23462], 40.00th=[ 23725], 50.00th=[ 23725], 60.00th=[ 23987], 00:46:27.053 | 70.00th=[ 24249], 80.00th=[ 24511], 90.00th=[ 25035], 95.00th=[ 28443], 00:46:27.054 | 99.00th=[212861], 99.50th=[227541], 99.90th=[250610], 99.95th=[250610], 00:46:27.054 | 99.99th=[341836] 00:46:27.054 bw ( KiB/s): min= 256, max= 2968, per=4.40%, avg=2483.79, stdev=724.11, samples=19 00:46:27.054 iops : min= 64, max= 742, avg=620.95, stdev=181.03, samples=19 00:46:27.054 lat (usec) : 750=0.03%, 1000=0.05% 00:46:27.054 lat (msec) : 2=2.17%, 4=0.29%, 10=0.29%, 20=15.24%, 50=79.95% 00:46:27.054 lat (msec) : 100=0.35%, 250=1.52%, 500=0.11% 00:46:27.054 cpu : usr=98.82%, sys=0.86%, ctx=70, majf=0, minf=41 00:46:27.054 IO depths : 1=2.1%, 2=4.3%, 4=10.9%, 8=70.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=90.8%, 8=5.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=6235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109942: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=581, BW=2326KiB/s (2381kB/s)(22.8MiB/10017msec) 00:46:27.054 slat (usec): min=5, max=109, avg=27.83, stdev=17.70 00:46:27.054 clat (msec): min=13, max=303, avg=27.26, stdev=25.46 00:46:27.054 lat (msec): min=13, max=303, avg=27.29, stdev=25.46 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.054 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.054 | 99.00th=[ 209], 99.50th=[ 253], 99.90th=[ 262], 99.95th=[ 262], 00:46:27.054 | 99.99th=[ 305] 00:46:27.054 bw ( KiB/s): min= 256, max= 2688, per=4.12%, avg=2323.20, stdev=809.84, samples=20 00:46:27.054 iops : min= 64, max= 672, avg=580.80, stdev=202.46, samples=20 00:46:27.054 lat (msec) : 20=0.34%, 50=97.77%, 100=0.24%, 250=1.10%, 500=0.55% 00:46:27.054 
cpu : usr=98.98%, sys=0.71%, ctx=33, majf=0, minf=27 00:46:27.054 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109943: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=585, BW=2340KiB/s (2397kB/s)(22.9MiB/10019msec) 00:46:27.054 slat (usec): min=5, max=124, avg=23.12, stdev=17.12 00:46:27.054 clat (msec): min=11, max=268, avg=27.15, stdev=24.83 00:46:27.054 lat (msec): min=11, max=268, avg=27.18, stdev=24.82 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.054 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.054 | 99.00th=[ 199], 99.50th=[ 253], 99.90th=[ 271], 99.95th=[ 271], 00:46:27.054 | 99.99th=[ 271] 00:46:27.054 bw ( KiB/s): min= 256, max= 2688, per=4.14%, avg=2338.40, stdev=790.93, samples=20 00:46:27.054 iops : min= 64, max= 672, avg=584.60, stdev=197.73, samples=20 00:46:27.054 lat (msec) : 20=1.93%, 50=96.06%, 100=0.10%, 250=1.30%, 500=0.61% 00:46:27.054 cpu : usr=98.61%, sys=0.89%, ctx=91, majf=0, minf=19 00:46:27.054 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=5862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109944: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=580, BW=2322KiB/s (2378kB/s)(22.7MiB/10015msec) 00:46:27.054 slat (nsec): min=5836, max=79296, avg=10447.24, stdev=6158.49 00:46:27.054 clat (msec): min=12, max=371, avg=27.47, stdev=29.27 00:46:27.054 lat (msec): min=12, max=371, avg=27.48, stdev=29.27 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:46:27.054 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.054 | 99.00th=[ 222], 99.50th=[ 309], 99.90th=[ 326], 99.95th=[ 326], 00:46:27.054 | 99.99th=[ 372] 00:46:27.054 bw ( KiB/s): min= 128, max= 2688, per=4.11%, avg=2319.20, stdev=823.07, samples=20 00:46:27.054 iops : min= 32, max= 672, avg=579.80, stdev=205.77, samples=20 00:46:27.054 lat (msec) : 20=0.38%, 50=98.14%, 250=0.52%, 500=0.96% 00:46:27.054 cpu : usr=98.96%, sys=0.75%, ctx=13, majf=0, minf=21 00:46:27.054 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=5814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109945: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=584, BW=2338KiB/s (2394kB/s)(22.9MiB/10015msec) 00:46:27.054 slat (usec): min=5, 
max=119, avg=26.75, stdev=17.01 00:46:27.054 clat (msec): min=9, max=372, avg=27.15, stdev=30.47 00:46:27.054 lat (msec): min=9, max=372, avg=27.18, stdev=30.47 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.054 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 28], 00:46:27.054 | 99.00th=[ 253], 99.50th=[ 313], 99.90th=[ 372], 99.95th=[ 372], 00:46:27.054 | 99.99th=[ 372] 00:46:27.054 bw ( KiB/s): min= 128, max= 2912, per=4.14%, avg=2335.20, stdev=851.49, samples=20 00:46:27.054 iops : min= 32, max= 728, avg=583.80, stdev=212.87, samples=20 00:46:27.054 lat (msec) : 10=0.07%, 20=6.44%, 50=92.13%, 250=0.31%, 500=1.06% 00:46:27.054 cpu : usr=98.73%, sys=0.92%, ctx=63, majf=0, minf=17 00:46:27.054 IO depths : 1=3.9%, 2=8.6%, 4=21.1%, 8=57.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=5854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109946: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=598, BW=2393KiB/s (2451kB/s)(23.4MiB/10002msec) 00:46:27.054 slat (usec): min=5, max=109, avg=24.18, stdev=19.87 00:46:27.054 clat (msec): min=8, max=596, avg=26.55, stdev=36.48 00:46:27.054 lat (msec): min=8, max=596, avg=26.58, stdev=36.48 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.054 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 28], 00:46:27.054 | 99.00th=[ 192], 99.50th=[ 317], 99.90th=[ 600], 99.95th=[ 600], 00:46:27.054 | 99.99th=[ 600] 00:46:27.054 bw ( KiB/s): min= 256, max= 3104, per=4.43%, avg=2503.11, stdev=717.58, samples=18 00:46:27.054 iops : min= 64, max= 776, avg=625.78, stdev=179.40, samples=18 00:46:27.054 lat (msec) : 10=0.20%, 20=12.65%, 50=86.08%, 250=0.53%, 500=0.27% 00:46:27.054 lat (msec) : 750=0.27% 00:46:27.054 cpu : usr=98.81%, sys=0.89%, ctx=39, majf=0, minf=14 00:46:27.054 IO depths : 1=2.4%, 2=5.4%, 4=16.1%, 8=65.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=92.2%, 8=3.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109947: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=580, BW=2323KiB/s (2378kB/s)(22.7MiB/10002msec) 00:46:27.054 slat (usec): min=5, max=115, avg=20.00, stdev=15.50 00:46:27.054 clat (msec): min=12, max=462, avg=27.39, stdev=30.14 00:46:27.054 lat (msec): min=12, max=462, avg=27.41, stdev=30.14 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.054 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.054 | 99.00th=[ 249], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 347], 00:46:27.054 | 99.99th=[ 464] 00:46:27.054 bw ( KiB/s): min= 128, max= 2688, per=4.07%, avg=2297.26, stdev=845.96, 
samples=19 00:46:27.054 iops : min= 32, max= 672, avg=574.32, stdev=211.49, samples=19 00:46:27.054 lat (msec) : 20=0.31%, 50=98.31%, 100=0.03%, 250=0.55%, 500=0.79% 00:46:27.054 cpu : usr=98.99%, sys=0.71%, ctx=27, majf=0, minf=23 00:46:27.054 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:27.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.054 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.054 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.054 filename0: (groupid=0, jobs=1): err= 0: pid=2109948: Sun Oct 13 14:42:29 2024 00:46:27.054 read: IOPS=582, BW=2329KiB/s (2385kB/s)(22.8MiB/10004msec) 00:46:27.054 slat (usec): min=5, max=105, avg=28.22, stdev=15.36 00:46:27.054 clat (msec): min=4, max=692, avg=27.25, stdev=37.80 00:46:27.054 lat (msec): min=4, max=692, avg=27.28, stdev=37.80 00:46:27.054 clat percentiles (msec): 00:46:27.054 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.054 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.054 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.054 | 99.00th=[ 86], 99.50th=[ 321], 99.90th=[ 600], 99.95th=[ 600], 00:46:27.054 | 99.99th=[ 693] 00:46:27.054 bw ( KiB/s): min= 240, max= 2704, per=4.29%, avg=2424.89, stdev=684.57, samples=18 00:46:27.054 iops : min= 60, max= 676, avg=606.22, stdev=171.14, samples=18 00:46:27.054 lat (msec) : 10=0.55%, 20=0.91%, 50=97.44%, 100=0.17%, 250=0.24% 00:46:27.055 lat (msec) : 500=0.41%, 750=0.27% 00:46:27.055 cpu : usr=98.81%, sys=0.83%, ctx=43, majf=0, minf=25 00:46:27.055 IO depths : 1=3.6%, 2=9.8%, 4=24.8%, 8=53.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109949: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=581, BW=2325KiB/s (2381kB/s)(22.8MiB/10018msec) 00:46:27.055 slat (usec): min=5, max=103, avg=27.59, stdev=16.43 00:46:27.055 clat (msec): min=16, max=317, avg=27.29, stdev=25.65 00:46:27.055 lat (msec): min=16, max=317, avg=27.32, stdev=25.65 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.055 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.055 | 99.00th=[ 222], 99.50th=[ 253], 99.90th=[ 262], 99.95th=[ 262], 00:46:27.055 | 99.99th=[ 317] 00:46:27.055 bw ( KiB/s): min= 256, max= 2688, per=4.12%, avg=2323.20, stdev=809.84, samples=20 00:46:27.055 iops : min= 64, max= 672, avg=580.80, stdev=202.46, samples=20 00:46:27.055 lat (msec) : 20=0.03%, 50=98.08%, 100=0.24%, 250=1.13%, 500=0.52% 00:46:27.055 cpu : usr=98.80%, sys=0.91%, ctx=68, majf=0, minf=22 00:46:27.055 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109950: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=583, BW=2336KiB/s (2392kB/s)(22.8MiB/10004msec) 00:46:27.055 slat (usec): min=5, max=103, avg=26.41, stdev=18.27 00:46:27.055 clat (msec): min=4, max=596, avg=27.19, stdev=37.96 00:46:27.055 lat (msec): min=4, max=596, avg=27.21, stdev=37.96 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 14], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.055 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 27], 00:46:27.055 | 99.00th=[ 85], 99.50th=[ 355], 99.90th=[ 600], 99.95th=[ 600], 00:46:27.055 | 99.99th=[ 600] 00:46:27.055 bw ( KiB/s): min= 256, max= 2832, per=4.31%, avg=2433.78, stdev=687.11, samples=18 00:46:27.055 iops : min= 64, max= 708, avg=608.44, stdev=171.78, samples=18 00:46:27.055 lat (msec) : 10=0.45%, 20=4.43%, 50=94.03%, 100=0.27%, 500=0.55% 00:46:27.055 lat (msec) : 750=0.27% 00:46:27.055 cpu : usr=98.76%, sys=0.87%, ctx=53, majf=0, minf=19 00:46:27.055 IO depths : 1=2.1%, 2=5.3%, 4=18.2%, 8=63.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=93.2%, 8=2.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109951: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=582, BW=2329KiB/s (2385kB/s)(22.8MiB/10004msec) 00:46:27.055 slat (usec): min=5, max=110, avg=29.19, stdev=15.17 00:46:27.055 clat (msec): min=4, max=595, avg=27.22, stdev=36.83 00:46:27.055 lat (msec): min=4, max=595, avg=27.25, stdev=36.83 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.055 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.055 | 99.00th=[ 192], 99.50th=[ 321], 99.90th=[ 600], 99.95th=[ 600], 00:46:27.055 | 99.99th=[ 600] 00:46:27.055 bw ( KiB/s): min= 256, max= 2688, per=4.29%, avg=2424.89, stdev=683.65, samples=18 00:46:27.055 iops : min= 64, max= 672, avg=606.22, stdev=170.91, samples=18 00:46:27.055 lat (msec) : 10=0.55%, 20=0.34%, 50=98.01%, 100=0.03%, 250=0.48% 00:46:27.055 lat (msec) : 500=0.31%, 750=0.27% 00:46:27.055 cpu : usr=98.88%, sys=0.71%, ctx=116, majf=0, minf=29 00:46:27.055 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109952: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=584, BW=2337KiB/s (2394kB/s)(22.9MiB/10021msec) 00:46:27.055 slat (nsec): min=5811, max=42333, avg=8349.12, stdev=4213.52 00:46:27.055 clat (msec): min=12, max=302, avg=27.31, stdev=24.02 00:46:27.055 lat (msec): min=12, max=302, avg=27.31, stdev=24.02 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 
25], 00:46:27.055 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 26], 00:46:27.055 | 99.00th=[ 190], 99.50th=[ 239], 99.90th=[ 249], 99.95th=[ 249], 00:46:27.055 | 99.99th=[ 305] 00:46:27.055 bw ( KiB/s): min= 368, max= 2688, per=4.14%, avg=2336.00, stdev=791.52, samples=20 00:46:27.055 iops : min= 92, max= 672, avg=584.00, stdev=197.88, samples=20 00:46:27.055 lat (msec) : 20=0.27%, 50=97.54%, 100=0.55%, 250=1.61%, 500=0.03% 00:46:27.055 cpu : usr=99.16%, sys=0.53%, ctx=55, majf=0, minf=30 00:46:27.055 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109953: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=598, BW=2395KiB/s (2453kB/s)(23.5MiB/10043msec) 00:46:27.055 slat (nsec): min=5804, max=71762, avg=10958.20, stdev=8652.43 00:46:27.055 clat (msec): min=7, max=587, avg=26.63, stdev=32.25 00:46:27.055 lat (msec): min=7, max=587, avg=26.65, stdev=32.25 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 22], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.055 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 32], 00:46:27.055 | 99.00th=[ 192], 99.50th=[ 234], 99.90th=[ 510], 99.95th=[ 510], 00:46:27.055 | 99.99th=[ 584] 00:46:27.055 bw ( KiB/s): min= 368, max= 2912, per=4.48%, avg=2527.16, stdev=672.22, samples=19 00:46:27.055 iops : min= 92, max= 728, avg=631.79, stdev=168.06, samples=19 00:46:27.055 lat (msec) : 10=0.37%, 20=14.93%, 50=83.37%, 100=0.03%, 250=1.03% 00:46:27.055 lat (msec) : 750=0.27% 00:46:27.055 cpu : usr=97.69%, sys=1.53%, ctx=689, majf=0, minf=26 00:46:27.055 IO depths : 1=0.5%, 2=1.1%, 4=4.7%, 8=78.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=89.5%, 8=8.0%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=6014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109954: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=579, BW=2319KiB/s (2374kB/s)(22.7MiB/10020msec) 00:46:27.055 slat (usec): min=3, max=104, avg=29.55, stdev=17.17 00:46:27.055 clat (msec): min=16, max=325, avg=27.35, stdev=28.28 00:46:27.055 lat (msec): min=16, max=325, avg=27.38, stdev=28.28 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.055 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.055 | 99.00th=[ 207], 99.50th=[ 313], 99.90th=[ 326], 99.95th=[ 326], 00:46:27.055 | 99.99th=[ 326] 00:46:27.055 bw ( KiB/s): min= 256, max= 2688, per=4.10%, avg=2316.80, stdev=822.12, samples=20 00:46:27.055 iops : min= 64, max= 672, avg=579.20, stdev=205.53, samples=20 00:46:27.055 lat (msec) : 20=0.38%, 50=97.97%, 250=0.83%, 500=0.83% 00:46:27.055 cpu : usr=99.00%, sys=0.71%, ctx=7, majf=0, minf=20 00:46:27.055 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.055 filename1: (groupid=0, jobs=1): err= 0: pid=2109955: Sun Oct 13 14:42:29 2024 00:46:27.055 read: IOPS=586, BW=2345KiB/s (2401kB/s)(22.9MiB/10019msec) 00:46:27.055 slat (nsec): min=5819, max=94843, avg=17530.11, stdev=13448.29 00:46:27.055 clat (msec): min=10, max=344, avg=27.15, stdev=25.92 00:46:27.055 lat (msec): min=10, max=344, avg=27.17, stdev=25.91 00:46:27.055 clat percentiles (msec): 00:46:27.055 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.055 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.055 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.055 | 99.00th=[ 213], 99.50th=[ 230], 99.90th=[ 342], 99.95th=[ 347], 00:46:27.055 | 99.99th=[ 347] 00:46:27.055 bw ( KiB/s): min= 224, max= 2688, per=4.15%, avg=2343.20, stdev=792.39, samples=20 00:46:27.055 iops : min= 56, max= 672, avg=585.80, stdev=198.10, samples=20 00:46:27.055 lat (msec) : 20=3.23%, 50=94.82%, 100=0.37%, 250=1.09%, 500=0.48% 00:46:27.055 cpu : usr=98.70%, sys=0.87%, ctx=116, majf=0, minf=23 00:46:27.055 IO depths : 1=5.4%, 2=11.2%, 4=23.3%, 8=52.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:46:27.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.055 issued rwts: total=5874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.055 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename1: (groupid=0, jobs=1): err= 0: pid=2109956: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=587, BW=2350KiB/s (2406kB/s)(23.0MiB/10009msec) 00:46:27.056 slat (nsec): min=5802, max=83915, avg=20831.38, stdev=13587.49 00:46:27.056 clat (msec): min=9, max=472, avg=27.06, stdev=30.31 00:46:27.056 lat (msec): min=9, max=472, avg=27.08, stdev=30.31 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 24], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.056 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 29], 00:46:27.056 | 99.00th=[ 249], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 355], 00:46:27.056 | 99.99th=[ 472] 00:46:27.056 bw ( KiB/s): min= 128, max= 2816, per=4.15%, avg=2345.60, stdev=841.33, samples=20 00:46:27.056 iops : min= 32, max= 704, avg=586.40, stdev=210.33, samples=20 00:46:27.056 lat (msec) : 10=0.03%, 20=6.82%, 50=91.79%, 100=0.03%, 250=0.54% 00:46:27.056 lat (msec) : 500=0.78% 00:46:27.056 cpu : usr=98.94%, sys=0.74%, ctx=69, majf=0, minf=22 00:46:27.056 IO depths : 1=3.8%, 2=7.7%, 4=16.9%, 8=61.9%, 16=9.7%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=92.0%, 8=3.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=5880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109957: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=584, BW=2338KiB/s (2394kB/s)(22.9MiB/10016msec) 00:46:27.056 slat (nsec): min=5581, max=91769, avg=12823.08, stdev=10881.42 00:46:27.056 clat (msec): min=7, max=307, avg=27.26, stdev=26.99 
00:46:27.056 lat (msec): min=7, max=307, avg=27.27, stdev=26.99 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 24], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:46:27.056 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 29], 00:46:27.056 | 99.00th=[ 222], 99.50th=[ 249], 99.90th=[ 309], 99.95th=[ 309], 00:46:27.056 | 99.99th=[ 309] 00:46:27.056 bw ( KiB/s): min= 128, max= 2840, per=4.14%, avg=2335.60, stdev=832.18, samples=20 00:46:27.056 iops : min= 32, max= 710, avg=583.90, stdev=208.05, samples=20 00:46:27.056 lat (msec) : 10=0.32%, 20=5.74%, 50=92.30%, 250=1.37%, 500=0.27% 00:46:27.056 cpu : usr=99.03%, sys=0.68%, ctx=23, majf=0, minf=22 00:46:27.056 IO depths : 1=3.8%, 2=8.9%, 4=21.7%, 8=56.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=5855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109958: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=612, BW=2449KiB/s (2508kB/s)(24.0MiB/10019msec) 00:46:27.056 slat (nsec): min=4646, max=94959, avg=12557.45, stdev=12471.80 00:46:27.056 clat (msec): min=8, max=325, avg=26.03, stdev=27.95 00:46:27.056 lat (msec): min=8, max=325, avg=26.04, stdev=27.95 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 20], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.056 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 31], 00:46:27.056 | 99.00th=[ 207], 99.50th=[ 313], 99.90th=[ 326], 99.95th=[ 326], 00:46:27.056 | 99.99th=[ 326] 00:46:27.056 bw ( KiB/s): min= 256, max= 3392, per=4.33%, avg=2447.60, stdev=898.77, samples=20 00:46:27.056 iops : min= 64, max= 848, avg=611.90, stdev=224.69, samples=20 00:46:27.056 lat (msec) : 10=0.24%, 20=20.72%, 50=77.47%, 250=0.78%, 500=0.78% 00:46:27.056 cpu : usr=98.90%, sys=0.74%, ctx=95, majf=0, minf=18 00:46:27.056 IO depths : 1=3.5%, 2=7.3%, 4=17.3%, 8=62.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=6135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109959: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=580, BW=2322KiB/s (2378kB/s)(22.7MiB/10005msec) 00:46:27.056 slat (usec): min=5, max=112, avg=30.62, stdev=15.66 00:46:27.056 clat (msec): min=7, max=587, avg=27.29, stdev=35.53 00:46:27.056 lat (msec): min=7, max=587, avg=27.32, stdev=35.53 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.056 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.056 | 99.00th=[ 184], 99.50th=[ 380], 99.90th=[ 514], 99.95th=[ 514], 00:46:27.056 | 99.99th=[ 592] 00:46:27.056 bw ( KiB/s): min= 256, max= 2688, per=4.29%, avg=2424.89, stdev=683.65, samples=18 00:46:27.056 iops : min= 64, max= 672, avg=606.22, stdev=170.91, samples=18 00:46:27.056 lat (msec) : 10=0.28%, 
20=0.31%, 50=98.31%, 250=0.28%, 500=0.55% 00:46:27.056 lat (msec) : 750=0.28% 00:46:27.056 cpu : usr=98.90%, sys=0.81%, ctx=21, majf=0, minf=19 00:46:27.056 IO depths : 1=5.5%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109960: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=582, BW=2330KiB/s (2386kB/s)(22.8MiB/10009msec) 00:46:27.056 slat (nsec): min=5276, max=75463, avg=13510.40, stdev=10543.88 00:46:27.056 clat (msec): min=13, max=373, avg=27.35, stdev=30.62 00:46:27.056 lat (msec): min=13, max=373, avg=27.36, stdev=30.62 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:46:27.056 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.056 | 99.00th=[ 253], 99.50th=[ 313], 99.90th=[ 372], 99.95th=[ 372], 00:46:27.056 | 99.99th=[ 372] 00:46:27.056 bw ( KiB/s): min= 128, max= 2864, per=4.12%, avg=2325.60, stdev=832.43, samples=20 00:46:27.056 iops : min= 32, max= 716, avg=581.40, stdev=208.11, samples=20 00:46:27.056 lat (msec) : 20=2.16%, 50=96.47%, 250=0.31%, 500=1.06% 00:46:27.056 cpu : usr=99.13%, sys=0.59%, ctx=13, majf=0, minf=17 00:46:27.056 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=5830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109961: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=633, BW=2535KiB/s (2596kB/s)(24.8MiB/10020msec) 00:46:27.056 slat (usec): min=5, max=110, avg=10.74, stdev= 9.85 00:46:27.056 clat (msec): min=7, max=317, avg=25.17, stdev=24.53 00:46:27.056 lat (msec): min=7, max=317, avg=25.18, stdev=24.53 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 20], 00:46:27.056 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.056 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 28], 00:46:27.056 | 99.00th=[ 205], 99.50th=[ 239], 99.90th=[ 249], 99.95th=[ 249], 00:46:27.056 | 99.99th=[ 317] 00:46:27.056 bw ( KiB/s): min= 256, max= 3632, per=4.49%, avg=2533.60, stdev=928.00, samples=20 00:46:27.056 iops : min= 64, max= 908, avg=633.40, stdev=232.00, samples=20 00:46:27.056 lat (msec) : 10=2.05%, 20=22.85%, 50=73.34%, 250=1.73%, 500=0.03% 00:46:27.056 cpu : usr=98.87%, sys=0.82%, ctx=65, majf=0, minf=23 00:46:27.056 IO depths : 1=2.8%, 2=5.9%, 4=15.0%, 8=66.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=91.3%, 8=3.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=6350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109962: Sun Oct 13 14:42:29 2024 00:46:27.056 read: 
IOPS=583, BW=2335KiB/s (2391kB/s)(22.8MiB/10004msec) 00:46:27.056 slat (nsec): min=5776, max=61821, avg=10798.60, stdev=6627.98 00:46:27.056 clat (msec): min=4, max=596, avg=27.31, stdev=38.15 00:46:27.056 lat (msec): min=4, max=596, avg=27.32, stdev=38.15 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 25], 00:46:27.056 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.056 | 99.00th=[ 94], 99.50th=[ 355], 99.90th=[ 600], 99.95th=[ 600], 00:46:27.056 | 99.99th=[ 600] 00:46:27.056 bw ( KiB/s): min= 240, max= 2736, per=4.31%, avg=2432.89, stdev=687.25, samples=18 00:46:27.056 iops : min= 60, max= 684, avg=608.22, stdev=171.81, samples=18 00:46:27.056 lat (msec) : 10=0.51%, 20=1.95%, 50=96.44%, 100=0.27%, 250=0.03% 00:46:27.056 lat (msec) : 500=0.51%, 750=0.27% 00:46:27.056 cpu : usr=98.95%, sys=0.69%, ctx=30, majf=0, minf=21 00:46:27.056 IO depths : 1=5.4%, 2=10.9%, 4=22.2%, 8=53.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:46:27.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.056 issued rwts: total=5840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.056 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.056 filename2: (groupid=0, jobs=1): err= 0: pid=2109963: Sun Oct 13 14:42:29 2024 00:46:27.056 read: IOPS=580, BW=2322KiB/s (2377kB/s)(22.7MiB/10007msec) 00:46:27.056 slat (nsec): min=5814, max=98293, avg=30387.00, stdev=17077.16 00:46:27.056 clat (msec): min=12, max=479, avg=27.30, stdev=30.51 00:46:27.056 lat (msec): min=12, max=479, avg=27.33, stdev=30.51 00:46:27.056 clat percentiles (msec): 00:46:27.056 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:46:27.056 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.056 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 25], 95.00th=[ 26], 00:46:27.057 | 99.00th=[ 222], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:46:27.057 | 99.99th=[ 481] 00:46:27.057 bw ( KiB/s): min= 128, max= 2768, per=4.11%, avg=2320.80, stdev=830.09, samples=20 00:46:27.057 iops : min= 32, max= 692, avg=580.20, stdev=207.52, samples=20 00:46:27.057 lat (msec) : 20=0.41%, 50=98.21%, 100=0.03%, 250=0.79%, 500=0.55% 00:46:27.057 cpu : usr=99.10%, sys=0.60%, ctx=37, majf=0, minf=28 00:46:27.057 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:46:27.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.057 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.057 issued rwts: total=5808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.057 filename2: (groupid=0, jobs=1): err= 0: pid=2109964: Sun Oct 13 14:42:29 2024 00:46:27.057 read: IOPS=597, BW=2389KiB/s (2446kB/s)(23.3MiB/10004msec) 00:46:27.057 slat (usec): min=5, max=122, avg=16.38, stdev=13.90 00:46:27.057 clat (msec): min=4, max=596, avg=26.69, stdev=37.69 00:46:27.057 lat (msec): min=4, max=596, avg=26.71, stdev=37.69 00:46:27.057 clat percentiles (msec): 00:46:27.057 | 1.00th=[ 12], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 23], 00:46:27.057 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:46:27.057 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 30], 00:46:27.057 | 99.00th=[ 86], 99.50th=[ 355], 99.90th=[ 600], 99.95th=[ 600], 
00:46:27.057 | 99.99th=[ 600] 00:46:27.057 bw ( KiB/s): min= 256, max= 2960, per=4.40%, avg=2486.22, stdev=711.30, samples=18 00:46:27.057 iops : min= 64, max= 740, avg=621.56, stdev=177.83, samples=18 00:46:27.057 lat (msec) : 10=0.87%, 20=12.71%, 50=85.35%, 100=0.27%, 500=0.54% 00:46:27.057 lat (msec) : 750=0.27% 00:46:27.057 cpu : usr=98.99%, sys=0.72%, ctx=30, majf=0, minf=20 00:46:27.057 IO depths : 1=0.8%, 2=2.4%, 4=8.9%, 8=73.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:46:27.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.057 complete : 0=0.0%, 4=90.6%, 8=6.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:27.057 issued rwts: total=5974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:27.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:46:27.057 00:46:27.057 Run status group 0 (all jobs): 00:46:27.057 READ: bw=55.1MiB/s (57.8MB/s), 2319KiB/s-2535KiB/s (2374kB/s-2596kB/s), io=554MiB (581MB), run=10002-10043msec 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 bdev_null0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:27.057 14:42:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 [2024-10-13 14:42:29.498667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 bdev_null1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:27.057 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:27.058 14:42:29 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:27.058 { 00:46:27.058 "params": { 00:46:27.058 "name": "Nvme$subsystem", 00:46:27.058 "trtype": "$TEST_TRANSPORT", 00:46:27.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:27.058 "adrfam": "ipv4", 00:46:27.058 "trsvcid": "$NVMF_PORT", 00:46:27.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:27.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:27.058 "hdgst": ${hdgst:-false}, 00:46:27.058 "ddgst": ${ddgst:-false} 00:46:27.058 }, 00:46:27.058 "method": "bdev_nvme_attach_controller" 00:46:27.058 } 00:46:27.058 EOF 00:46:27.058 )") 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:27.058 { 00:46:27.058 "params": { 00:46:27.058 "name": "Nvme$subsystem", 00:46:27.058 "trtype": "$TEST_TRANSPORT", 00:46:27.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:27.058 "adrfam": "ipv4", 00:46:27.058 "trsvcid": "$NVMF_PORT", 00:46:27.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:27.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:27.058 "hdgst": ${hdgst:-false}, 00:46:27.058 "ddgst": ${ddgst:-false} 00:46:27.058 }, 00:46:27.058 "method": "bdev_nvme_attach_controller" 00:46:27.058 } 00:46:27.058 EOF 00:46:27.058 )") 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@582 -- # jq . 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:27.058 "params": { 00:46:27.058 "name": "Nvme0", 00:46:27.058 "trtype": "tcp", 00:46:27.058 "traddr": "10.0.0.2", 00:46:27.058 "adrfam": "ipv4", 00:46:27.058 "trsvcid": "4420", 00:46:27.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:27.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:27.058 "hdgst": false, 00:46:27.058 "ddgst": false 00:46:27.058 }, 00:46:27.058 "method": "bdev_nvme_attach_controller" 00:46:27.058 },{ 00:46:27.058 "params": { 00:46:27.058 "name": "Nvme1", 00:46:27.058 "trtype": "tcp", 00:46:27.058 "traddr": "10.0.0.2", 00:46:27.058 "adrfam": "ipv4", 00:46:27.058 "trsvcid": "4420", 00:46:27.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:27.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:27.058 "hdgst": false, 00:46:27.058 "ddgst": false 00:46:27.058 }, 00:46:27.058 "method": "bdev_nvme_attach_controller" 00:46:27.058 }' 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:27.058 14:42:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:27.058 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:27.058 ... 00:46:27.058 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:46:27.058 ... 
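[editor's note] The joined JSON printed just above is what the spdk_bdev fio plugin reads from /dev/fd/62. A minimal standalone sketch of the equivalent invocation follows, assuming the cnode0 subsystem created earlier in this run; the file names bdev.json and dif.fio are illustrative, and the outer "subsystems"/"config" wrapper is an assumption based on SPDK's JSON config format (the trace only shows the joined fragments):

```bash
# Hypothetical standalone re-run of the traced fio_bdev invocation.
# Assumes cnode0 listening on 10.0.0.2:4420, as created earlier in this log.
cat > bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=bdev.json dif.fio   # dif.fio: the generated job file
```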
00:46:27.058 fio-3.35 00:46:27.058 Starting 4 threads 00:46:32.348 00:46:32.348 filename0: (groupid=0, jobs=1): err= 0: pid=2112316: Sun Oct 13 14:42:35 2024 00:46:32.348 read: IOPS=2968, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:46:32.348 slat (nsec): min=5644, max=65062, avg=8555.88, stdev=2767.85 00:46:32.348 clat (usec): min=1210, max=5083, avg=2673.19, stdev=185.20 00:46:32.348 lat (usec): min=1219, max=5092, avg=2681.75, stdev=185.13 00:46:32.348 clat percentiles (usec): 00:46:32.348 | 1.00th=[ 2089], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:46:32.348 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:46:32.348 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2900], 00:46:32.348 | 99.00th=[ 3392], 99.50th=[ 3687], 99.90th=[ 4080], 99.95th=[ 4178], 00:46:32.348 | 99.99th=[ 5080] 00:46:32.348 bw ( KiB/s): min=23583, max=23872, per=25.06%, avg=23758.11, stdev=91.61, samples=9 00:46:32.348 iops : min= 2947, max= 2984, avg=2969.67, stdev=11.66, samples=9 00:46:32.348 lat (msec) : 2=0.77%, 4=99.07%, 10=0.16% 00:46:32.348 cpu : usr=96.30%, sys=3.44%, ctx=6, majf=0, minf=34 00:46:32.348 IO depths : 1=0.1%, 2=0.1%, 4=71.3%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:32.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 issued rwts: total=14843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:32.348 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:32.348 filename0: (groupid=0, jobs=1): err= 0: pid=2112317: Sun Oct 13 14:42:35 2024 00:46:32.348 read: IOPS=2980, BW=23.3MiB/s (24.4MB/s)(116MiB/5001msec) 00:46:32.348 slat (nsec): min=5645, max=75337, avg=6728.42, stdev=2393.10 00:46:32.348 clat (usec): min=979, max=4933, avg=2667.75, stdev=222.67 00:46:32.348 lat (usec): min=985, max=4957, avg=2674.48, stdev=222.79 00:46:32.348 clat percentiles (usec): 00:46:32.348 | 1.00th=[ 1991], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2638], 00:46:32.348 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:32.348 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:46:32.348 | 99.00th=[ 3589], 99.50th=[ 3687], 99.90th=[ 4047], 99.95th=[ 4621], 00:46:32.348 | 99.99th=[ 4686] 00:46:32.348 bw ( KiB/s): min=23472, max=24208, per=25.13%, avg=23827.56, stdev=223.83, samples=9 00:46:32.348 iops : min= 2934, max= 3026, avg=2978.44, stdev=27.98, samples=9 00:46:32.348 lat (usec) : 1000=0.03% 00:46:32.348 lat (msec) : 2=1.07%, 4=98.75%, 10=0.15% 00:46:32.348 cpu : usr=95.80%, sys=3.74%, ctx=157, majf=0, minf=46 00:46:32.348 IO depths : 1=0.1%, 2=0.2%, 4=67.1%, 8=32.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:32.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 issued rwts: total=14906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:32.348 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:32.348 filename1: (groupid=0, jobs=1): err= 0: pid=2112318: Sun Oct 13 14:42:35 2024 00:46:32.348 read: IOPS=2942, BW=23.0MiB/s (24.1MB/s)(115MiB/5001msec) 00:46:32.348 slat (nsec): min=5645, max=54442, avg=6508.27, stdev=2411.12 00:46:32.348 clat (usec): min=1180, max=4584, avg=2700.26, stdev=205.67 00:46:32.348 lat (usec): min=1185, max=4589, avg=2706.77, stdev=205.56 00:46:32.348 clat percentiles (usec): 00:46:32.348 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 
00:46:32.348 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:32.348 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:46:32.348 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4228], 99.95th=[ 4359], 00:46:32.348 | 99.99th=[ 4555] 00:46:32.348 bw ( KiB/s): min=23408, max=23664, per=24.84%, avg=23552.00, stdev=85.79, samples=9 00:46:32.348 iops : min= 2926, max= 2958, avg=2944.00, stdev=10.72, samples=9 00:46:32.348 lat (msec) : 2=0.28%, 4=99.19%, 10=0.53% 00:46:32.348 cpu : usr=96.68%, sys=3.10%, ctx=7, majf=0, minf=43 00:46:32.348 IO depths : 1=0.1%, 2=0.1%, 4=74.0%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:32.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 issued rwts: total=14716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:32.348 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:32.348 filename1: (groupid=0, jobs=1): err= 0: pid=2112319: Sun Oct 13 14:42:35 2024 00:46:32.348 read: IOPS=2961, BW=23.1MiB/s (24.3MB/s)(116MiB/5001msec) 00:46:32.348 slat (nsec): min=5647, max=75806, avg=6675.99, stdev=2544.61 00:46:32.348 clat (usec): min=1110, max=5188, avg=2685.38, stdev=210.42 00:46:32.348 lat (usec): min=1116, max=5213, avg=2692.06, stdev=210.56 00:46:32.348 clat percentiles (usec): 00:46:32.348 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:46:32.348 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:46:32.348 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2900], 00:46:32.348 | 99.00th=[ 3556], 99.50th=[ 3884], 99.90th=[ 4621], 99.95th=[ 5145], 00:46:32.348 | 99.99th=[ 5211] 00:46:32.348 bw ( KiB/s): min=23408, max=23808, per=24.97%, avg=23672.89, stdev=120.83, samples=9 00:46:32.348 iops : min= 2926, max= 2976, avg=2959.11, stdev=15.10, samples=9 00:46:32.348 lat (msec) : 2=0.55%, 4=99.07%, 10=0.38% 00:46:32.348 cpu : usr=96.64%, sys=3.14%, ctx=6, majf=0, minf=57 00:46:32.348 IO depths : 1=0.1%, 2=0.1%, 4=68.2%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:32.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.348 issued rwts: total=14809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:32.348 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:32.348 00:46:32.348 Run status group 0 (all jobs): 00:46:32.348 READ: bw=92.6MiB/s (97.1MB/s), 23.0MiB/s-23.3MiB/s (24.1MB/s-24.4MB/s), io=463MiB (486MB), run=5001-5001msec 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.348 14:42:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:32.348 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.349 00:46:32.349 real 0m24.828s 00:46:32.349 user 5m20.331s 00:46:32.349 sys 0m4.695s 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:32.349 14:42:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:32.349 ************************************ 00:46:32.349 END TEST fio_dif_rand_params 00:46:32.349 ************************************ 00:46:32.349 14:42:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:46:32.349 14:42:36 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:32.349 14:42:36 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:32.349 14:42:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:32.349 ************************************ 00:46:32.349 START TEST fio_dif_digest 00:46:32.349 ************************************ 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.349 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:32.610 bdev_null0 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:32.610 [2024-10-13 14:42:36.087453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:32.610 { 00:46:32.610 "params": { 00:46:32.610 "name": "Nvme$subsystem", 00:46:32.610 "trtype": "$TEST_TRANSPORT", 00:46:32.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:32.610 "adrfam": "ipv4", 00:46:32.610 "trsvcid": "$NVMF_PORT", 00:46:32.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:32.610 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:46:32.610 "hdgst": ${hdgst:-false}, 00:46:32.610 "ddgst": ${ddgst:-false} 00:46:32.610 }, 00:46:32.610 "method": "bdev_nvme_attach_controller" 00:46:32.610 } 00:46:32.610 EOF 00:46:32.610 )") 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:32.610 "params": { 00:46:32.610 "name": "Nvme0", 00:46:32.610 "trtype": "tcp", 00:46:32.610 "traddr": "10.0.0.2", 00:46:32.610 "adrfam": "ipv4", 00:46:32.610 "trsvcid": "4420", 00:46:32.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:32.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:32.610 "hdgst": true, 00:46:32.610 "ddgst": true 00:46:32.610 }, 00:46:32.610 "method": "bdev_nvme_attach_controller" 00:46:32.610 }' 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:46:32.610 14:42:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:32.870 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:32.871 ... 
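[editor's note] Note that the only change from the random-params config earlier in the log is "hdgst": true and "ddgst": true: the heredoc template expands ${hdgst:-false}/${ddgst:-false}, so dif.sh@128 flips both digests on simply by setting the shell variables before the generator runs. With these enabled, host and target compute and verify CRC32C header and data digests on every NVMe/TCP PDU, which is what this fio_dif_digest test exercises. A sketch of the switch, assuming gen_nvmf_target_json is in scope:

```bash
# Digest on/off is driven purely by shell variables; the JSON template in the
# trace defaults them to false when unset.
hdgst=true
ddgst=true
gen_nvmf_target_json 0   # now emits "hdgst": true, "ddgst": true for cnode0
```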
00:46:32.871 fio-3.35 00:46:32.871 Starting 3 threads 00:46:45.101 00:46:45.101 filename0: (groupid=0, jobs=1): err= 0: pid=2113666: Sun Oct 13 14:42:47 2024 00:46:45.101 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(369MiB/10047msec) 00:46:45.102 slat (nsec): min=5911, max=31722, avg=6925.00, stdev=1079.81 00:46:45.102 clat (usec): min=6786, max=51667, avg=10200.68, stdev=2263.79 00:46:45.102 lat (usec): min=6793, max=51674, avg=10207.60, stdev=2263.80 00:46:45.102 clat percentiles (usec): 00:46:45.102 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:46:45.102 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:46:45.102 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:46:45.102 | 99.00th=[12125], 99.50th=[12387], 99.90th=[51643], 99.95th=[51643], 00:46:45.102 | 99.99th=[51643] 00:46:45.102 bw ( KiB/s): min=34560, max=38656, per=33.36%, avg=37708.80, stdev=1057.47, samples=20 00:46:45.102 iops : min= 270, max= 302, avg=294.60, stdev= 8.26, samples=20 00:46:45.102 lat (msec) : 10=45.05%, 20=54.68%, 50=0.03%, 100=0.24% 00:46:45.102 cpu : usr=94.53%, sys=5.25%, ctx=11, majf=0, minf=116 00:46:45.102 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:45.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.102 issued rwts: total=2948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:45.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:45.102 filename0: (groupid=0, jobs=1): err= 0: pid=2113667: Sun Oct 13 14:42:47 2024 00:46:45.102 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(387MiB/10046msec) 00:46:45.102 slat (nsec): min=5878, max=32538, avg=6917.38, stdev=1167.58 00:46:45.102 clat (usec): min=6325, max=48350, avg=9721.23, stdev=1253.35 00:46:45.102 lat (usec): min=6331, max=48357, avg=9728.14, stdev=1253.32 00:46:45.102 clat percentiles (usec): 00:46:45.102 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:46:45.102 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:46:45.102 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:46:45.102 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12125], 99.95th=[45876], 00:46:45.102 | 99.99th=[48497] 00:46:45.102 bw ( KiB/s): min=38400, max=40704, per=35.00%, avg=39564.80, stdev=721.57, samples=20 00:46:45.102 iops : min= 300, max= 318, avg=309.10, stdev= 5.64, samples=20 00:46:45.102 lat (msec) : 10=65.11%, 20=34.82%, 50=0.06% 00:46:45.102 cpu : usr=95.11%, sys=4.66%, ctx=17, majf=0, minf=122 00:46:45.102 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:45.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.102 issued rwts: total=3093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:45.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:45.102 filename0: (groupid=0, jobs=1): err= 0: pid=2113668: Sun Oct 13 14:42:47 2024 00:46:45.102 read: IOPS=281, BW=35.2MiB/s (37.0MB/s)(354MiB/10044msec) 00:46:45.102 slat (nsec): min=5862, max=42619, avg=7011.39, stdev=1286.00 00:46:45.102 clat (usec): min=7150, max=52875, avg=10600.63, stdev=2181.43 00:46:45.102 lat (usec): min=7158, max=52881, avg=10607.64, stdev=2181.43 00:46:45.102 clat percentiles (usec): 00:46:45.102 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 
00:46:45.102 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:46:45.102 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:46:45.102 | 99.00th=[12518], 99.50th=[12780], 99.90th=[52691], 99.95th=[52691], 00:46:45.102 | 99.99th=[52691] 00:46:45.102 bw ( KiB/s): min=33536, max=37376, per=32.06%, avg=36236.80, stdev=948.73, samples=20 00:46:45.102 iops : min= 262, max= 292, avg=283.10, stdev= 7.41, samples=20 00:46:45.102 lat (msec) : 10=27.33%, 20=72.42%, 50=0.07%, 100=0.18% 00:46:45.102 cpu : usr=94.74%, sys=4.70%, ctx=582, majf=0, minf=146 00:46:45.102 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:45.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:45.102 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:45.102 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:45.102 00:46:45.102 Run status group 0 (all jobs): 00:46:45.102 READ: bw=110MiB/s (116MB/s), 35.2MiB/s-38.5MiB/s (37.0MB/s-40.4MB/s), io=1109MiB (1163MB), run=10044-10047msec 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:45.102 00:46:45.102 real 0m11.187s 00:46:45.102 user 0m41.998s 00:46:45.102 sys 0m1.790s 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:45.102 14:42:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:46:45.102 ************************************ 00:46:45.102 END TEST fio_dif_digest 00:46:45.102 ************************************ 00:46:45.102 14:42:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:45.102 14:42:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:45.102 rmmod nvme_tcp 00:46:45.102 rmmod nvme_fabrics 00:46:45.102 rmmod nvme_keyring 00:46:45.102 14:42:47 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2102944 ']' 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2102944 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2102944 ']' 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2102944 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102944 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102944' 00:46:45.102 killing process with pid 2102944 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2102944 00:46:45.102 14:42:47 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2102944 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:46:45.102 14:42:47 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:47.649 Waiting for block devices as requested 00:46:47.649 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:47.649 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:47.649 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:47.649 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:47.649 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:47.649 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:47.909 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:47.909 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:47.909 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:48.169 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:48.169 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:48.429 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:48.429 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:48.429 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:48.689 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:48.689 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:48.689 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:48.950 14:42:52 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:48.950 14:42:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:48.950 14:42:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:51.498 14:42:54 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:51.498 
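[editor's note] The nvmftestfini sequence above unloads the host-side NVMe modules, kills the target (pid 2102944), strips only the SPDK-tagged firewall rules, and tears down the test namespace. A condensed sketch of the network portion; the body of _remove_spdk_ns is an assumption, the other two lines mirror the trace:

```bash
# Only rules the suite tagged with an SPDK_NVMF comment are dropped, so any
# unrelated firewall state on the CI host survives the cleanup.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumed: what _remove_spdk_ns does
ip -4 addr flush cvl_0_1          # final flush, as logged
```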
00:46:51.498 real 1m18.879s 00:46:51.498 user 7m56.817s 00:46:51.498 sys 0m22.348s 00:46:51.498 14:42:54 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:51.498 14:42:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:51.498 ************************************ 00:46:51.498 END TEST nvmf_dif 00:46:51.498 ************************************ 00:46:51.498 14:42:54 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:51.498 14:42:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:51.498 14:42:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:51.498 14:42:54 -- common/autotest_common.sh@10 -- # set +x 00:46:51.498 ************************************ 00:46:51.498 START TEST nvmf_abort_qd_sizes 00:46:51.498 ************************************ 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:51.498 * Looking for test storage... 00:46:51.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.498 --rc genhtml_branch_coverage=1 00:46:51.498 --rc genhtml_function_coverage=1 00:46:51.498 --rc genhtml_legend=1 00:46:51.498 --rc geninfo_all_blocks=1 00:46:51.498 --rc geninfo_unexecuted_blocks=1 00:46:51.498 00:46:51.498 ' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.498 --rc genhtml_branch_coverage=1 00:46:51.498 --rc genhtml_function_coverage=1 00:46:51.498 --rc genhtml_legend=1 00:46:51.498 --rc geninfo_all_blocks=1 00:46:51.498 --rc geninfo_unexecuted_blocks=1 00:46:51.498 00:46:51.498 ' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:46:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.498 --rc genhtml_branch_coverage=1 00:46:51.498 --rc genhtml_function_coverage=1 00:46:51.498 --rc genhtml_legend=1 00:46:51.498 --rc geninfo_all_blocks=1 00:46:51.498 --rc geninfo_unexecuted_blocks=1 00:46:51.498 00:46:51.498 ' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:51.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:51.498 --rc genhtml_branch_coverage=1 00:46:51.498 --rc genhtml_function_coverage=1 00:46:51.498 --rc genhtml_legend=1 00:46:51.498 --rc geninfo_all_blocks=1 00:46:51.498 --rc geninfo_unexecuted_blocks=1 00:46:51.498 00:46:51.498 ' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:51.498 14:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:51.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:51.499 14:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:51.499 14:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:59.644 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:59.644 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:59.644 Found net devices under 0000:31:00.0: cvl_0_0 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:59.644 Found net devices under 0000:31:00.1: cvl_0_1 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:59.644 14:43:02 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:59.644 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:59.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:59.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:46:59.645 00:46:59.645 --- 10.0.0.2 ping statistics --- 00:46:59.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:59.645 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:59.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:59.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:46:59.645 00:46:59.645 --- 10.0.0.1 ping statistics --- 00:46:59.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:59.645 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:46:59.645 14:43:02 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:02.193 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:02.193 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:02.193 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:02.193 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:02.193 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:02.454 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2123206 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2123206 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2123206 ']' 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
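The namespace plumbing traced above boils down to a handful of ip(8) commands: one e810 port (cvl_0_0) is moved into a private namespace to act as the NVMe/TCP target, while the other (cvl_0_1) stays in the root namespace as the initiator, and connectivity is proven in both directions before the target starts. A condensed sketch, using the interface names and addresses from this run:

    # carve the target port into its own namespace (names from this run)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side in the root namespace,
    # and the target side inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic through (the suite tags the rule for later cleanup)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1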
00:47:03.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:03.027 14:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:03.027 [2024-10-13 14:43:06.560184] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:47:03.027 [2024-10-13 14:43:06.560249] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:03.027 [2024-10-13 14:43:06.703299] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:03.288 [2024-10-13 14:43:06.754408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:03.289 [2024-10-13 14:43:06.783907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:03.289 [2024-10-13 14:43:06.783965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:03.289 [2024-10-13 14:43:06.783973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:03.289 [2024-10-13 14:43:06.783981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:03.289 [2024-10-13 14:43:06.783987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:03.289 [2024-10-13 14:43:06.785863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:03.289 [2024-10-13 14:43:06.786020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:03.289 [2024-10-13 14:43:06.786185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:03.289 [2024-10-13 14:43:06.786185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:03.861 14:43:07 
nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:03.861 14:43:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:03.861 ************************************ 00:47:03.861 START TEST spdk_target_abort 00:47:03.861 ************************************ 00:47:03.861 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:47:03.861 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:03.861 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:47:03.861 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:03.861 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:04.123 spdk_targetn1 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:04.123 [2024-10-13 14:43:07.773618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:04.123 [2024-10-13 14:43:07.813860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:04.123 14:43:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:04.713 [2024-10-13 14:43:08.195632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:672 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:47:04.713 [2024-10-13 14:43:08.195663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:47:04.713 [2024-10-13 14:43:08.235690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1976 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:47:04.713 [2024-10-13 14:43:08.235716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:47:04.713 [2024-10-13 14:43:08.246593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2392 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:47:04.713 [2024-10-13 14:43:08.246615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:47:04.713 [2024-10-13 14:43:08.270717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3192 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:47:04.714 [2024-10-13 14:43:08.270741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0090 p:0 m:0 dnr:0 00:47:04.714 [2024-10-13 14:43:08.278633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3448 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:47:04.714 [2024-10-13 14:43:08.278653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b1 p:0 m:0 dnr:0 00:47:08.104 Initializing NVMe Controllers 00:47:08.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:08.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:08.104 Initialization complete. Launching workers. 
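While the first queue-depth pass runs above, note that the whole spdk_target_abort bring-up and sweep reduce to five RPCs plus a loop over qds=(4 24 64). A minimal sketch using the suite's rpc_cmd wrapper (plain scripts/rpc.py against the namespaced target works the same; all values are taken from this run):

    # claim the local NVMe device and export it over NVMe/TCP
    rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # sweep the abort example across the three queue depths
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done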
00:47:08.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11211, failed: 5 00:47:08.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2026, failed to submit 9190 00:47:08.104 success 750, unsuccessful 1276, failed 0 00:47:08.104 14:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:08.104 14:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:08.104 [2024-10-13 14:43:11.528259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:688 len:8 PRP1 0x200004e54000 PRP2 0x0 00:47:08.104 [2024-10-13 14:43:11.528307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:47:08.104 [2024-10-13 14:43:11.566346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:1576 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:47:08.105 [2024-10-13 14:43:11.566374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00d2 p:1 m:0 dnr:0 00:47:08.105 [2024-10-13 14:43:11.620192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:2904 len:8 PRP1 0x200004e50000 PRP2 0x0 00:47:08.105 [2024-10-13 14:43:11.620216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:47:08.105 [2024-10-13 14:43:11.636195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3256 len:8 PRP1 0x200004e46000 PRP2 0x0 00:47:08.105 [2024-10-13 14:43:11.636217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:47:08.105 [2024-10-13 14:43:11.652222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:3648 len:8 PRP1 0x200004e46000 PRP2 0x0 00:47:08.105 [2024-10-13 14:43:11.652244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00d1 p:0 m:0 dnr:0 00:47:10.685 [2024-10-13 14:43:13.862907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:54976 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:47:10.685 [2024-10-13 14:43:13.862935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00d9 p:1 m:0 dnr:0 00:47:10.685 [2024-10-13 14:43:14.334970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:65800 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:47:10.685 [2024-10-13 14:43:14.334994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0022 p:1 m:0 dnr:0 00:47:10.945 Initializing NVMe Controllers 00:47:10.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:10.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:10.945 Initialization complete. Launching workers. 
00:47:10.945 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8692, failed: 7 00:47:10.945 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1209, failed to submit 7490 00:47:10.945 success 361, unsuccessful 848, failed 0 00:47:10.945 14:43:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:10.945 14:43:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:12.854 [2024-10-13 14:43:16.560559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:133 nsid:1 lba:198088 len:8 PRP1 0x200004b00000 PRP2 0x0 00:47:12.854 [2024-10-13 14:43:16.560590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:133 cdw0:0 sqhd:0090 p:1 m:0 dnr:0 00:47:14.234 Initializing NVMe Controllers 00:47:14.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:47:14.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:14.234 Initialization complete. Launching workers. 00:47:14.234 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43470, failed: 1 00:47:14.234 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2730, failed to submit 40741 00:47:14.234 success 592, unsuccessful 2138, failed 0 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:14.234 14:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2123206 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2123206 ']' 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2123206 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:16.144 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2123206 00:47:16.145 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:16.145 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:16.145 14:43:19 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2123206' 00:47:16.145 killing process with pid 2123206 00:47:16.145 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2123206 00:47:16.145 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2123206 00:47:16.405 00:47:16.405 real 0m12.421s 00:47:16.405 user 0m50.024s 00:47:16.405 sys 0m2.144s 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:16.405 ************************************ 00:47:16.405 END TEST spdk_target_abort 00:47:16.405 ************************************ 00:47:16.405 14:43:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:47:16.405 14:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:16.405 14:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:16.405 14:43:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:16.405 ************************************ 00:47:16.405 START TEST kernel_target_abort 00:47:16.405 ************************************ 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:47:16.405 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:16.406 14:43:19 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:47:16.406 14:43:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:16.406 14:43:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:20.613 Waiting for block devices as requested 00:47:20.613 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:20.613 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:20.613 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:20.613 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:20.613 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:20.613 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:20.613 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:20.614 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:20.614 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:20.874 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:20.874 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:20.874 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:21.134 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:21.134 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:21.134 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:21.394 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:21.394 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:47:21.655 No valid GPT data, bailing 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:47:21.655 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:47:21.916 00:47:21.916 Discovery Log Number of Records 2, Generation counter 2 00:47:21.916 =====Discovery Log Entry 0====== 00:47:21.916 trtype: tcp 00:47:21.916 adrfam: ipv4 00:47:21.916 subtype: current discovery subsystem 00:47:21.916 treq: not specified, sq flow control disable supported 00:47:21.916 portid: 1 00:47:21.916 trsvcid: 4420 00:47:21.916 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:21.916 traddr: 10.0.0.1 00:47:21.916 eflags: none 00:47:21.916 sectype: none 00:47:21.916 =====Discovery Log Entry 1====== 00:47:21.916 trtype: tcp 00:47:21.916 adrfam: ipv4 00:47:21.916 subtype: nvme subsystem 00:47:21.916 treq: not specified, sq flow control disable supported 00:47:21.916 portid: 1 00:47:21.916 trsvcid: 4420 00:47:21.916 subnqn: nqn.2016-06.io.spdk:testnqn 00:47:21.916 traddr: 10.0.0.1 00:47:21.916 eflags: none 00:47:21.916 sectype: none 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:21.916 
14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:21.916 14:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:25.213 Initializing NVMe Controllers 00:47:25.213 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:25.213 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:25.213 Initialization complete. Launching workers. 00:47:25.213 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67380, failed: 0 00:47:25.213 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67380, failed to submit 0 00:47:25.213 success 0, unsuccessful 67380, failed 0 00:47:25.213 14:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:25.213 14:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:28.512 Initializing NVMe Controllers 00:47:28.512 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:28.512 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:28.512 Initialization complete. Launching workers. 
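The kernel_target_abort setup traced earlier drives the in-kernel nvmet target purely through configfs. The trace does not show the redirect targets of the echo commands, so the attribute names below are the standard nvmet ones rather than anything printed in the log; paths and values match this run:

    modprobe nvmet_tcp   # pulls in nvmet via module dependency
    cfg=/sys/kernel/config/nvmet
    sub=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$sub" "$sub/namespaces/1" $cfg/ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"   # assumed redirect target
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > $cfg/ports/1/addr_traddr
    echo tcp > $cfg/ports/1/addr_trtype
    echo 4420 > $cfg/ports/1/addr_trsvcid
    echo ipv4 > $cfg/ports/1/addr_adrfam
    # exposing the subsystem on the port is just a symlink
    ln -s "$sub" $cfg/ports/1/subsystems/
    # both discovery log entries should now be visible
    # (the suite also passes --hostnqn/--hostid explicitly)
    nvme discover -t tcp -a 10.0.0.1 -s 4420

Teardown is the mirror image: disable the namespace, remove the port symlink, rmdir namespace/port/subsystem, then modprobe -r nvmet_tcp nvmet, as the clean_kernel_target trace below shows.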
00:47:28.512 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114064, failed: 0 00:47:28.512 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28722, failed to submit 85342 00:47:28.512 success 0, unsuccessful 28722, failed 0 00:47:28.512 14:43:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:28.513 14:43:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:31.817 Initializing NVMe Controllers 00:47:31.817 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:31.817 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:31.817 Initialization complete. Launching workers. 00:47:31.817 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146040, failed: 0 00:47:31.817 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36542, failed to submit 109498 00:47:31.817 success 0, unsuccessful 36542, failed 0 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:47:31.817 14:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:35.120 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:47:35.120 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:47:35.381 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:47:35.381 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:47:35.381 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:47:37.292 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:47:37.292 00:47:37.292 real 0m20.938s 00:47:37.292 user 0m9.917s 00:47:37.292 sys 0m6.386s 00:47:37.292 14:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:37.292 14:43:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:37.292 ************************************ 00:47:37.292 END TEST kernel_target_abort 00:47:37.292 ************************************ 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:37.292 14:43:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:37.292 rmmod nvme_tcp 00:47:37.292 rmmod nvme_fabrics 00:47:37.552 rmmod nvme_keyring 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2123206 ']' 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2123206 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2123206 ']' 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2123206 00:47:37.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2123206) - No such process 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2123206 is not found' 00:47:37.552 Process with pid 2123206 is not found 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:47:37.552 14:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:47:40.849 Waiting for block devices as requested 00:47:40.849 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:40.849 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:41.109 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:41.109 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:41.109 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:41.369 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:41.369 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:41.369 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:41.629 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:47:41.629 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:47:41.889 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:47:41.889 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:47:41.889 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:47:42.148 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:47:42.148 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:47:42.148 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:47:42.409 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:42.669 14:43:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:44.578 14:43:48 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:44.578 00:47:44.578 real 0m53.501s 00:47:44.578 user 1m5.457s 00:47:44.578 sys 0m19.733s 00:47:44.578 14:43:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:44.578 14:43:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:44.578 ************************************ 00:47:44.578 END TEST nvmf_abort_qd_sizes 00:47:44.578 ************************************ 00:47:44.869 14:43:48 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:44.869 14:43:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:44.869 14:43:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:44.869 14:43:48 -- common/autotest_common.sh@10 -- # set +x 00:47:44.869 ************************************ 00:47:44.869 START TEST keyring_file 00:47:44.869 ************************************ 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:47:44.869 * Looking for test storage... 
00:47:44.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@345 -- # : 1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@353 -- # local d=1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@355 -- # echo 1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@353 -- # local d=2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@355 -- # echo 2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:44.869 14:43:48 keyring_file -- scripts/common.sh@368 -- # return 0 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:47:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.869 --rc genhtml_branch_coverage=1 00:47:44.869 --rc genhtml_function_coverage=1 00:47:44.869 --rc genhtml_legend=1 00:47:44.869 --rc geninfo_all_blocks=1 00:47:44.869 --rc geninfo_unexecuted_blocks=1 00:47:44.869 00:47:44.869 ' 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:47:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.869 --rc genhtml_branch_coverage=1 00:47:44.869 --rc genhtml_function_coverage=1 00:47:44.869 --rc genhtml_legend=1 00:47:44.869 --rc geninfo_all_blocks=1 
00:47:44.869 --rc geninfo_unexecuted_blocks=1 00:47:44.869 00:47:44.869 ' 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:47:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.869 --rc genhtml_branch_coverage=1 00:47:44.869 --rc genhtml_function_coverage=1 00:47:44.869 --rc genhtml_legend=1 00:47:44.869 --rc geninfo_all_blocks=1 00:47:44.869 --rc geninfo_unexecuted_blocks=1 00:47:44.869 00:47:44.869 ' 00:47:44.869 14:43:48 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:47:44.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:44.869 --rc genhtml_branch_coverage=1 00:47:44.869 --rc genhtml_function_coverage=1 00:47:44.869 --rc genhtml_legend=1 00:47:44.869 --rc geninfo_all_blocks=1 00:47:44.869 --rc geninfo_unexecuted_blocks=1 00:47:44.869 00:47:44.869 ' 00:47:44.869 14:43:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:44.869 14:43:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:44.869 14:43:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:45.130 14:43:48 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:47:45.130 14:43:48 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:45.130 14:43:48 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:45.130 14:43:48 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:45.130 14:43:48 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.130 14:43:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.130 14:43:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.130 14:43:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:47:45.130 14:43:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@51 -- # : 0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:45.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
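prep_key, whose trace begins just above and continues below, wraps a raw hex key into the NVMe TLS PSK interchange format and stores it in a mode-0600 temp file. The inline `python -` body is not expanded in the trace, so the encoding sketched here (base64 of the key bytes plus a little-endian CRC-32 trailer, with '00' as the no-hash indicator for digest=0) is an assumption based on the PSK interchange format, not a copy of the suite's script:

    key=00112233445566778899aabbccddeeff   # key0 from this test
    path=$(mktemp)                         # e.g. /tmp/tmp.BZ36uKppEv in this run
    python3 - "$key" > "$path" <<'PY'
    import base64, sys, zlib
    raw = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: little-endian CRC-32 trailer
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())
    PY
    chmod 0600 "$path"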
00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BZ36uKppEv 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@731 -- # python - 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BZ36uKppEv 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BZ36uKppEv 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BZ36uKppEv 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tQ2hJVtebd 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:47:45.130 14:43:48 keyring_file -- nvmf/common.sh@731 -- # python - 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tQ2hJVtebd 00:47:45.130 14:43:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tQ2hJVtebd 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tQ2hJVtebd 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=2133742 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2133742 00:47:45.130 14:43:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:45.130 14:43:48 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2133742 ']' 00:47:45.130 14:43:48 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:45.130 14:43:48 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:45.130 14:43:48 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:45.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:45.130 14:43:48 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:45.130 14:43:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:45.130 [2024-10-13 14:43:48.755645] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:47:45.130 [2024-10-13 14:43:48.755700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133742 ] 00:47:45.389 [2024-10-13 14:43:48.886409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:45.390 [2024-10-13 14:43:48.933758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:45.390 [2024-10-13 14:43:48.952496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:47:45.959 14:43:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:45.959 [2024-10-13 14:43:49.537691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:45.959 null0 00:47:45.959 [2024-10-13 14:43:49.569663] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:45.959 [2024-10-13 14:43:49.569958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:45.959 14:43:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:45.959 [2024-10-13 14:43:49.601656] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:47:45.959 request: 00:47:45.959 { 00:47:45.959 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:47:45.959 "secure_channel": false, 00:47:45.959 "listen_address": { 00:47:45.959 "trtype": "tcp", 00:47:45.959 "traddr": "127.0.0.1", 00:47:45.959 "trsvcid": "4420" 00:47:45.959 }, 
00:47:45.959 "method": "nvmf_subsystem_add_listener", 00:47:45.959 "req_id": 1 00:47:45.959 } 00:47:45.959 Got JSON-RPC error response 00:47:45.959 response: 00:47:45.959 { 00:47:45.959 "code": -32602, 00:47:45.959 "message": "Invalid parameters" 00:47:45.959 } 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:45.959 14:43:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=2133834 00:47:45.959 14:43:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2133834 /var/tmp/bperf.sock 00:47:45.959 14:43:49 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2133834 ']' 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:45.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:45.959 14:43:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:45.960 [2024-10-13 14:43:49.659680] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:47:45.960 [2024-10-13 14:43:49.659730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133834 ] 00:47:46.218 [2024-10-13 14:43:49.789848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:47:46.218 [2024-10-13 14:43:49.838110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:46.218 [2024-10-13 14:43:49.856706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:46.787 14:43:50 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:46.787 14:43:50 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:47:46.787 14:43:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:46.787 14:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:47.048 14:43:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tQ2hJVtebd 00:47:47.048 14:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tQ2hJVtebd 00:47:47.308 14:43:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:47:47.308 14:43:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:47.308 14:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:47.308 14:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:47.308 14:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:47.570 14:43:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BZ36uKppEv == \/\t\m\p\/\t\m\p\.\B\Z\3\6\u\K\p\p\E\v ]] 00:47:47.570 14:43:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:47:47.570 14:43:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:47.570 14:43:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.tQ2hJVtebd == \/\t\m\p\/\t\m\p\.\t\Q\2\h\J\V\t\e\b\d ]] 00:47:47.570 14:43:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:47.570 14:43:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:47.829 14:43:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:47.829 14:43:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:47:47.829 14:43:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:47.829 14:43:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:47.829 14:43:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:47.829 14:43:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:47.829 14:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:48.112 
14:43:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:47:48.112 14:43:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:48.112 14:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:48.112 [2024-10-13 14:43:51.744683] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:48.444 nvme0n1 00:47:48.444 14:43:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:47:48.444 14:43:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:48.444 14:43:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:48.444 14:43:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:48.444 14:43:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:48.444 14:43:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:48.444 14:43:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:47:48.444 14:43:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:47:48.444 14:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:48.444 14:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:48.444 14:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:48.444 14:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:48.444 14:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:48.722 14:43:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:47:48.722 14:43:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:48.722 Running I/O for 1 seconds... 
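Every refcount assertion in this run uses the same probe: list the keys over the bdevperf RPC socket and pull one key's refcnt out with jq. A condensed sketch of the get_refcnt idiom from keyring/common.sh, using the same rpc.py invocation seen in the trace:

```bash
# List keys over the bperf RPC socket, select one by name, print its refcnt.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_refcnt() {
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r ".[] | select(.name == \"$1\") | .refcnt"
}

# key0 is held by the keyring and by the attached nvme0 controller,
# so the test expects 2 here; key1 is unused and stays at 1.
(( $(get_refcnt key0) == 2 ))
```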
00:47:49.661 18800.00 IOPS, 73.44 MiB/s 00:47:49.661 Latency(us) 00:47:49.661 [2024-10-13T12:43:53.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:49.662 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:47:49.662 nvme0n1 : 1.00 18858.06 73.66 0.00 0.00 6775.80 2299.12 16641.28 00:47:49.662 [2024-10-13T12:43:53.369Z] =================================================================================================================== 00:47:49.662 [2024-10-13T12:43:53.369Z] Total : 18858.06 73.66 0.00 0.00 6775.80 2299.12 16641.28 00:47:49.662 { 00:47:49.662 "results": [ 00:47:49.662 { 00:47:49.662 "job": "nvme0n1", 00:47:49.662 "core_mask": "0x2", 00:47:49.662 "workload": "randrw", 00:47:49.662 "percentage": 50, 00:47:49.662 "status": "finished", 00:47:49.662 "queue_depth": 128, 00:47:49.662 "io_size": 4096, 00:47:49.662 "runtime": 1.003815, 00:47:49.662 "iops": 18858.056514397573, 00:47:49.662 "mibps": 73.66428325936552, 00:47:49.662 "io_failed": 0, 00:47:49.662 "io_timeout": 0, 00:47:49.662 "avg_latency_us": 6775.799119939835, 00:47:49.662 "min_latency_us": 2299.1246241229537, 00:47:49.662 "max_latency_us": 16641.282993651854 00:47:49.662 } 00:47:49.662 ], 00:47:49.662 "core_count": 1 00:47:49.662 } 00:47:49.662 14:43:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:49.662 14:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:49.923 14:43:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:47:49.923 14:43:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:49.923 14:43:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:49.923 14:43:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:49.923 14:43:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:49.923 14:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:50.183 14:43:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:50.183 14:43:53 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:47:50.183 14:43:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:50.183 14:43:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:50.183 14:43:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:50.183 14:43:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:50.183 14:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:50.183 14:43:53 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:47:50.183 14:43:53 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@638 -- # local 
arg=bperf_cmd 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:50.183 14:43:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:50.183 14:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:50.443 [2024-10-13 14:43:54.032011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:50.443 [2024-10-13 14:43:54.032072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d40d0 (107): Transport endpoint is not connected 00:47:50.443 [2024-10-13 14:43:54.033066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d40d0 (9): Bad file descriptor 00:47:50.443 [2024-10-13 14:43:54.034061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:50.443 [2024-10-13 14:43:54.034073] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:50.443 [2024-10-13 14:43:54.034078] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:50.443 [2024-10-13 14:43:54.034085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
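The stack of *ERROR* lines above is the point of this step: the controller was attached with key1, which does not match the PSK the target listener was created with, so the TLS connection collapses and the RPC below returns an I/O error. The NOT wrapper turns that expected failure into a pass; a simplified sketch of the idiom (the real autotest_common.sh helper also runs valid_exec_arg and inspects the exit status, as the trace shows):

```bash
# Expected-failure wrapper: succeed only if the wrapped command fails.
NOT() {
  ! "$@"
}

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Attaching with the wrong PSK must fail for this step to pass:
NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
```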
00:47:50.443 request: 00:47:50.443 { 00:47:50.443 "name": "nvme0", 00:47:50.443 "trtype": "tcp", 00:47:50.443 "traddr": "127.0.0.1", 00:47:50.443 "adrfam": "ipv4", 00:47:50.443 "trsvcid": "4420", 00:47:50.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:50.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:50.443 "prchk_reftag": false, 00:47:50.443 "prchk_guard": false, 00:47:50.443 "hdgst": false, 00:47:50.443 "ddgst": false, 00:47:50.443 "psk": "key1", 00:47:50.443 "allow_unrecognized_csi": false, 00:47:50.443 "method": "bdev_nvme_attach_controller", 00:47:50.443 "req_id": 1 00:47:50.443 } 00:47:50.443 Got JSON-RPC error response 00:47:50.443 response: 00:47:50.443 { 00:47:50.443 "code": -5, 00:47:50.443 "message": "Input/output error" 00:47:50.443 } 00:47:50.443 14:43:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:50.443 14:43:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:50.443 14:43:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:50.443 14:43:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:50.443 14:43:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:47:50.443 14:43:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:50.443 14:43:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:50.443 14:43:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:50.443 14:43:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:50.443 14:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:50.703 14:43:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:50.703 14:43:54 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:47:50.703 14:43:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:50.703 14:43:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:50.703 14:43:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:50.703 14:43:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:50.703 14:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:50.703 14:43:54 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:47:50.703 14:43:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:47:50.703 14:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:50.963 14:43:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:50.963 14:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:51.222 14:43:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:51.222 14:43:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:51.222 14:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:51.222 14:43:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:47:51.222 14:43:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BZ36uKppEv 00:47:51.222 14:43:54 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:51.222 14:43:54 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:51.222 14:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:51.482 [2024-10-13 14:43:55.034421] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BZ36uKppEv': 0100660 00:47:51.482 [2024-10-13 14:43:55.034438] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:51.482 request: 00:47:51.482 { 00:47:51.482 "name": "key0", 00:47:51.482 "path": "/tmp/tmp.BZ36uKppEv", 00:47:51.482 "method": "keyring_file_add_key", 00:47:51.482 "req_id": 1 00:47:51.482 } 00:47:51.482 Got JSON-RPC error response 00:47:51.482 response: 00:47:51.482 { 00:47:51.482 "code": -1, 00:47:51.482 "message": "Operation not permitted" 00:47:51.482 } 00:47:51.482 14:43:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:51.482 14:43:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:51.482 14:43:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:51.482 14:43:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:51.482 14:43:55 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BZ36uKppEv 00:47:51.482 14:43:55 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:51.482 14:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZ36uKppEv 00:47:51.741 14:43:55 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BZ36uKppEv 00:47:51.741 14:43:55 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:51.741 14:43:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:51.741 14:43:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:51.741 14:43:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:51.741 14:43:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:51.741 14:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:51.742 14:43:55 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:51.742 14:43:55 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:51.742 14:43:55 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:51.742 14:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:52.001 [2024-10-13 14:43:55.546553] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BZ36uKppEv': No such file or directory 00:47:52.001 [2024-10-13 14:43:55.546569] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:52.001 [2024-10-13 14:43:55.546582] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:52.001 [2024-10-13 14:43:55.546587] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:52.001 [2024-10-13 14:43:55.546593] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:52.001 [2024-10-13 14:43:55.546597] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:52.001 request: 00:47:52.001 { 00:47:52.001 "name": "nvme0", 00:47:52.001 "trtype": "tcp", 00:47:52.001 "traddr": "127.0.0.1", 00:47:52.001 "adrfam": "ipv4", 00:47:52.001 "trsvcid": "4420", 00:47:52.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:52.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:52.001 "prchk_reftag": false, 00:47:52.001 "prchk_guard": false, 00:47:52.001 "hdgst": false, 00:47:52.001 "ddgst": false, 00:47:52.001 "psk": "key0", 00:47:52.002 "allow_unrecognized_csi": false, 00:47:52.002 "method": "bdev_nvme_attach_controller", 00:47:52.002 "req_id": 1 00:47:52.002 } 00:47:52.002 Got JSON-RPC error response 00:47:52.002 response: 00:47:52.002 { 00:47:52.002 "code": -19, 00:47:52.002 "message": "No such device" 00:47:52.002 } 00:47:52.002 14:43:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:52.002 14:43:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:52.002 14:43:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:52.002 14:43:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:52.002 14:43:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:52.002 14:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:52.262 14:43:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OzJuDw9MWV 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:52.262 14:43:55 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:52.262 14:43:55 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:47:52.262 14:43:55 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:47:52.262 14:43:55 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:47:52.262 14:43:55 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:47:52.262 14:43:55 keyring_file -- nvmf/common.sh@731 -- # python - 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OzJuDw9MWV 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OzJuDw9MWV 00:47:52.262 14:43:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.OzJuDw9MWV 00:47:52.262 14:43:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OzJuDw9MWV 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OzJuDw9MWV 00:47:52.262 14:43:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:52.262 14:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:52.523 nvme0n1 00:47:52.523 14:43:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:52.523 14:43:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:52.523 14:43:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:52.523 14:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:52.523 14:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:52.523 14:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:52.782 14:43:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:52.782 14:43:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:52.783 14:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:53.043 14:43:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:53.043 14:43:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:53.043 14:43:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:53.043 14:43:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:53.043 14:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:53.303 14:43:56 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:53.303 14:43:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:53.303 14:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:53.562 14:43:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:53.562 14:43:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:53.562 14:43:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:53.822 14:43:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:53.822 14:43:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OzJuDw9MWV 00:47:53.822 14:43:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OzJuDw9MWV 00:47:53.822 14:43:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tQ2hJVtebd 00:47:53.822 14:43:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tQ2hJVtebd 00:47:54.081 14:43:57 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:54.081 14:43:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:54.341 nvme0n1 00:47:54.341 14:43:57 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:54.341 14:43:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:54.602 14:43:58 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:54.602 "subsystems": [ 00:47:54.602 { 00:47:54.602 "subsystem": "keyring", 00:47:54.602 "config": [ 00:47:54.602 { 00:47:54.602 "method": "keyring_file_add_key", 00:47:54.602 "params": { 00:47:54.602 "name": "key0", 00:47:54.602 "path": "/tmp/tmp.OzJuDw9MWV" 00:47:54.602 } 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "method": "keyring_file_add_key", 00:47:54.602 "params": { 00:47:54.602 "name": "key1", 00:47:54.602 "path": "/tmp/tmp.tQ2hJVtebd" 00:47:54.602 } 00:47:54.602 } 00:47:54.602 ] 00:47:54.602 
}, 00:47:54.602 { 00:47:54.602 "subsystem": "iobuf", 00:47:54.602 "config": [ 00:47:54.602 { 00:47:54.602 "method": "iobuf_set_options", 00:47:54.602 "params": { 00:47:54.602 "small_pool_count": 8192, 00:47:54.602 "large_pool_count": 1024, 00:47:54.602 "small_bufsize": 8192, 00:47:54.602 "large_bufsize": 135168 00:47:54.602 } 00:47:54.602 } 00:47:54.602 ] 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "subsystem": "sock", 00:47:54.602 "config": [ 00:47:54.602 { 00:47:54.602 "method": "sock_set_default_impl", 00:47:54.602 "params": { 00:47:54.602 "impl_name": "posix" 00:47:54.602 } 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "method": "sock_impl_set_options", 00:47:54.602 "params": { 00:47:54.602 "impl_name": "ssl", 00:47:54.602 "recv_buf_size": 4096, 00:47:54.602 "send_buf_size": 4096, 00:47:54.602 "enable_recv_pipe": true, 00:47:54.602 "enable_quickack": false, 00:47:54.602 "enable_placement_id": 0, 00:47:54.602 "enable_zerocopy_send_server": true, 00:47:54.602 "enable_zerocopy_send_client": false, 00:47:54.602 "zerocopy_threshold": 0, 00:47:54.602 "tls_version": 0, 00:47:54.602 "enable_ktls": false 00:47:54.602 } 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "method": "sock_impl_set_options", 00:47:54.602 "params": { 00:47:54.602 "impl_name": "posix", 00:47:54.602 "recv_buf_size": 2097152, 00:47:54.602 "send_buf_size": 2097152, 00:47:54.602 "enable_recv_pipe": true, 00:47:54.602 "enable_quickack": false, 00:47:54.602 "enable_placement_id": 0, 00:47:54.602 "enable_zerocopy_send_server": true, 00:47:54.602 "enable_zerocopy_send_client": false, 00:47:54.602 "zerocopy_threshold": 0, 00:47:54.602 "tls_version": 0, 00:47:54.602 "enable_ktls": false 00:47:54.602 } 00:47:54.602 } 00:47:54.602 ] 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "subsystem": "vmd", 00:47:54.602 "config": [] 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "subsystem": "accel", 00:47:54.602 "config": [ 00:47:54.602 { 00:47:54.602 "method": "accel_set_options", 00:47:54.602 "params": { 00:47:54.602 "small_cache_size": 128, 00:47:54.602 "large_cache_size": 16, 00:47:54.602 "task_count": 2048, 00:47:54.602 "sequence_count": 2048, 00:47:54.602 "buf_count": 2048 00:47:54.602 } 00:47:54.602 } 00:47:54.602 ] 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "subsystem": "bdev", 00:47:54.602 "config": [ 00:47:54.602 { 00:47:54.602 "method": "bdev_set_options", 00:47:54.602 "params": { 00:47:54.602 "bdev_io_pool_size": 65535, 00:47:54.602 "bdev_io_cache_size": 256, 00:47:54.602 "bdev_auto_examine": true, 00:47:54.602 "iobuf_small_cache_size": 128, 00:47:54.602 "iobuf_large_cache_size": 16 00:47:54.602 } 00:47:54.602 }, 00:47:54.602 { 00:47:54.602 "method": "bdev_raid_set_options", 00:47:54.602 "params": { 00:47:54.602 "process_window_size_kb": 1024, 00:47:54.602 "process_max_bandwidth_mb_sec": 0 00:47:54.602 } 00:47:54.602 }, 00:47:54.602 { 00:47:54.603 "method": "bdev_iscsi_set_options", 00:47:54.603 "params": { 00:47:54.603 "timeout_sec": 30 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "bdev_nvme_set_options", 00:47:54.603 "params": { 00:47:54.603 "action_on_timeout": "none", 00:47:54.603 "timeout_us": 0, 00:47:54.603 "timeout_admin_us": 0, 00:47:54.603 "keep_alive_timeout_ms": 10000, 00:47:54.603 "arbitration_burst": 0, 00:47:54.603 "low_priority_weight": 0, 00:47:54.603 "medium_priority_weight": 0, 00:47:54.603 "high_priority_weight": 0, 00:47:54.603 "nvme_adminq_poll_period_us": 10000, 00:47:54.603 "nvme_ioq_poll_period_us": 0, 00:47:54.603 "io_queue_requests": 512, 00:47:54.603 "delay_cmd_submit": true, 00:47:54.603 
"transport_retry_count": 4, 00:47:54.603 "bdev_retry_count": 3, 00:47:54.603 "transport_ack_timeout": 0, 00:47:54.603 "ctrlr_loss_timeout_sec": 0, 00:47:54.603 "reconnect_delay_sec": 0, 00:47:54.603 "fast_io_fail_timeout_sec": 0, 00:47:54.603 "disable_auto_failback": false, 00:47:54.603 "generate_uuids": false, 00:47:54.603 "transport_tos": 0, 00:47:54.603 "nvme_error_stat": false, 00:47:54.603 "rdma_srq_size": 0, 00:47:54.603 "io_path_stat": false, 00:47:54.603 "allow_accel_sequence": false, 00:47:54.603 "rdma_max_cq_size": 0, 00:47:54.603 "rdma_cm_event_timeout_ms": 0, 00:47:54.603 "dhchap_digests": [ 00:47:54.603 "sha256", 00:47:54.603 "sha384", 00:47:54.603 "sha512" 00:47:54.603 ], 00:47:54.603 "dhchap_dhgroups": [ 00:47:54.603 "null", 00:47:54.603 "ffdhe2048", 00:47:54.603 "ffdhe3072", 00:47:54.603 "ffdhe4096", 00:47:54.603 "ffdhe6144", 00:47:54.603 "ffdhe8192" 00:47:54.603 ] 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "bdev_nvme_attach_controller", 00:47:54.603 "params": { 00:47:54.603 "name": "nvme0", 00:47:54.603 "trtype": "TCP", 00:47:54.603 "adrfam": "IPv4", 00:47:54.603 "traddr": "127.0.0.1", 00:47:54.603 "trsvcid": "4420", 00:47:54.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:54.603 "prchk_reftag": false, 00:47:54.603 "prchk_guard": false, 00:47:54.603 "ctrlr_loss_timeout_sec": 0, 00:47:54.603 "reconnect_delay_sec": 0, 00:47:54.603 "fast_io_fail_timeout_sec": 0, 00:47:54.603 "psk": "key0", 00:47:54.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:54.603 "hdgst": false, 00:47:54.603 "ddgst": false, 00:47:54.603 "multipath": "multipath" 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "bdev_nvme_set_hotplug", 00:47:54.603 "params": { 00:47:54.603 "period_us": 100000, 00:47:54.603 "enable": false 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "bdev_wait_for_examine" 00:47:54.603 } 00:47:54.603 ] 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "subsystem": "nbd", 00:47:54.603 "config": [] 00:47:54.603 } 00:47:54.603 ] 00:47:54.603 }' 00:47:54.603 14:43:58 keyring_file -- keyring/file.sh@115 -- # killprocess 2133834 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2133834 ']' 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2133834 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@955 -- # uname 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2133834 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2133834' 00:47:54.603 killing process with pid 2133834 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@969 -- # kill 2133834 00:47:54.603 Received shutdown signal, test time was about 1.000000 seconds 00:47:54.603 00:47:54.603 Latency(us) 00:47:54.603 [2024-10-13T12:43:58.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:54.603 [2024-10-13T12:43:58.310Z] =================================================================================================================== 00:47:54.603 [2024-10-13T12:43:58.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:54.603 14:43:58 keyring_file -- 
common/autotest_common.sh@974 -- # wait 2133834 00:47:54.603 14:43:58 keyring_file -- keyring/file.sh@118 -- # bperfpid=2135655 00:47:54.603 14:43:58 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2135655 /var/tmp/bperf.sock 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2135655 ']' 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:54.603 14:43:58 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:54.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:54.603 14:43:58 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:54.603 14:43:58 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:54.603 "subsystems": [ 00:47:54.603 { 00:47:54.603 "subsystem": "keyring", 00:47:54.603 "config": [ 00:47:54.603 { 00:47:54.603 "method": "keyring_file_add_key", 00:47:54.603 "params": { 00:47:54.603 "name": "key0", 00:47:54.603 "path": "/tmp/tmp.OzJuDw9MWV" 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "keyring_file_add_key", 00:47:54.603 "params": { 00:47:54.603 "name": "key1", 00:47:54.603 "path": "/tmp/tmp.tQ2hJVtebd" 00:47:54.603 } 00:47:54.603 } 00:47:54.603 ] 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "subsystem": "iobuf", 00:47:54.603 "config": [ 00:47:54.603 { 00:47:54.603 "method": "iobuf_set_options", 00:47:54.603 "params": { 00:47:54.603 "small_pool_count": 8192, 00:47:54.603 "large_pool_count": 1024, 00:47:54.603 "small_bufsize": 8192, 00:47:54.603 "large_bufsize": 135168 00:47:54.603 } 00:47:54.603 } 00:47:54.603 ] 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "subsystem": "sock", 00:47:54.603 "config": [ 00:47:54.603 { 00:47:54.603 "method": "sock_set_default_impl", 00:47:54.603 "params": { 00:47:54.603 "impl_name": "posix" 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "sock_impl_set_options", 00:47:54.603 "params": { 00:47:54.603 "impl_name": "ssl", 00:47:54.603 "recv_buf_size": 4096, 00:47:54.603 "send_buf_size": 4096, 00:47:54.603 "enable_recv_pipe": true, 00:47:54.603 "enable_quickack": false, 00:47:54.603 "enable_placement_id": 0, 00:47:54.603 "enable_zerocopy_send_server": true, 00:47:54.603 "enable_zerocopy_send_client": false, 00:47:54.603 "zerocopy_threshold": 0, 00:47:54.603 "tls_version": 0, 00:47:54.603 "enable_ktls": false 00:47:54.603 } 00:47:54.603 }, 00:47:54.603 { 00:47:54.603 "method": "sock_impl_set_options", 00:47:54.603 "params": { 00:47:54.603 "impl_name": "posix", 00:47:54.603 "recv_buf_size": 2097152, 00:47:54.603 "send_buf_size": 2097152, 00:47:54.603 "enable_recv_pipe": true, 00:47:54.603 "enable_quickack": false, 00:47:54.603 "enable_placement_id": 0, 00:47:54.603 "enable_zerocopy_send_server": true, 00:47:54.604 "enable_zerocopy_send_client": false, 00:47:54.604 "zerocopy_threshold": 0, 00:47:54.604 "tls_version": 0, 00:47:54.604 "enable_ktls": false 00:47:54.604 } 00:47:54.604 } 00:47:54.604 ] 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "subsystem": "vmd", 00:47:54.604 "config": [] 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "subsystem": "accel", 
00:47:54.604 "config": [ 00:47:54.604 { 00:47:54.604 "method": "accel_set_options", 00:47:54.604 "params": { 00:47:54.604 "small_cache_size": 128, 00:47:54.604 "large_cache_size": 16, 00:47:54.604 "task_count": 2048, 00:47:54.604 "sequence_count": 2048, 00:47:54.604 "buf_count": 2048 00:47:54.604 } 00:47:54.604 } 00:47:54.604 ] 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "subsystem": "bdev", 00:47:54.604 "config": [ 00:47:54.604 { 00:47:54.604 "method": "bdev_set_options", 00:47:54.604 "params": { 00:47:54.604 "bdev_io_pool_size": 65535, 00:47:54.604 "bdev_io_cache_size": 256, 00:47:54.604 "bdev_auto_examine": true, 00:47:54.604 "iobuf_small_cache_size": 128, 00:47:54.604 "iobuf_large_cache_size": 16 00:47:54.604 } 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "method": "bdev_raid_set_options", 00:47:54.604 "params": { 00:47:54.604 "process_window_size_kb": 1024, 00:47:54.604 "process_max_bandwidth_mb_sec": 0 00:47:54.604 } 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "method": "bdev_iscsi_set_options", 00:47:54.604 "params": { 00:47:54.604 "timeout_sec": 30 00:47:54.604 } 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "method": "bdev_nvme_set_options", 00:47:54.604 "params": { 00:47:54.604 "action_on_timeout": "none", 00:47:54.604 "timeout_us": 0, 00:47:54.604 "timeout_admin_us": 0, 00:47:54.604 "keep_alive_timeout_ms": 10000, 00:47:54.604 "arbitration_burst": 0, 00:47:54.604 "low_priority_weight": 0, 00:47:54.604 "medium_priority_weight": 0, 00:47:54.604 "high_priority_weight": 0, 00:47:54.604 "nvme_adminq_poll_period_us": 10000, 00:47:54.604 "nvme_ioq_poll_period_us": 0, 00:47:54.604 "io_queue_requests": 512, 00:47:54.604 "delay_cmd_submit": true, 00:47:54.604 "transport_retry_count": 4, 00:47:54.604 "bdev_retry_count": 3, 00:47:54.604 "transport_ack_timeout": 0, 00:47:54.604 "ctrlr_loss_timeout_sec": 0, 00:47:54.604 "reconnect_delay_sec": 0, 00:47:54.604 "fast_io_fail_timeout_sec": 0, 00:47:54.604 "disable_auto_failback": false, 00:47:54.604 "generate_uuids": false, 00:47:54.604 "transport_tos": 0, 00:47:54.604 "nvme_error_stat": false, 00:47:54.604 "rdma_srq_size": 0, 00:47:54.604 "io_path_stat": false, 00:47:54.604 "allow_accel_sequence": false, 00:47:54.604 "rdma_max_cq_size": 0, 00:47:54.604 "rdma_cm_event_timeout_ms": 0, 00:47:54.604 "dhchap_digests": [ 00:47:54.604 "sha256", 00:47:54.604 "sha384", 00:47:54.604 "sha512" 00:47:54.604 ], 00:47:54.604 "dhchap_dhgroups": [ 00:47:54.604 "null", 00:47:54.604 "ffdhe2048", 00:47:54.604 "ffdhe3072", 00:47:54.604 "ffdhe4096", 00:47:54.604 "ffdhe6144", 00:47:54.604 "ffdhe8192" 00:47:54.604 ] 00:47:54.604 } 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "method": "bdev_nvme_attach_controller", 00:47:54.604 "params": { 00:47:54.604 "name": "nvme0", 00:47:54.604 "trtype": "TCP", 00:47:54.604 "adrfam": "IPv4", 00:47:54.604 "traddr": "127.0.0.1", 00:47:54.604 "trsvcid": "4420", 00:47:54.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:54.604 "prchk_reftag": false, 00:47:54.604 "prchk_guard": false, 00:47:54.604 "ctrlr_loss_timeout_sec": 0, 00:47:54.604 "reconnect_delay_sec": 0, 00:47:54.604 "fast_io_fail_timeout_sec": 0, 00:47:54.604 "psk": "key0", 00:47:54.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:54.604 "hdgst": false, 00:47:54.604 "ddgst": false, 00:47:54.604 "multipath": "multipath" 00:47:54.604 } 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "method": "bdev_nvme_set_hotplug", 00:47:54.604 "params": { 00:47:54.604 "period_us": 100000, 00:47:54.604 "enable": false 00:47:54.604 } 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "method": 
"bdev_wait_for_examine" 00:47:54.604 } 00:47:54.604 ] 00:47:54.604 }, 00:47:54.604 { 00:47:54.604 "subsystem": "nbd", 00:47:54.604 "config": [] 00:47:54.604 } 00:47:54.604 ] 00:47:54.604 }' 00:47:54.604 14:43:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:54.604 [2024-10-13 14:43:58.284493] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:47:54.604 [2024-10-13 14:43:58.284547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135655 ] 00:47:54.864 [2024-10-13 14:43:58.414549] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:47:54.864 [2024-10-13 14:43:58.462430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:54.864 [2024-10-13 14:43:58.478404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:55.123 [2024-10-13 14:43:58.615521] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:55.383 14:43:59 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:55.383 14:43:59 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:47:55.383 14:43:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:55.383 14:43:59 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:55.383 14:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:55.644 14:43:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:55.644 14:43:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:55.644 14:43:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:55.644 14:43:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:55.644 14:43:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:55.644 14:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:55.644 14:43:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:55.904 14:43:59 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:55.904 14:43:59 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:55.904 14:43:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:55.904 14:43:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:55.904 14:43:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:55.904 14:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:55.904 14:43:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:56.164 14:43:59 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:56.164 14:43:59 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:56.164 14:43:59 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:56.164 14:43:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:56.164 14:43:59 keyring_file -- 
keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:56.164 14:43:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:56.164 14:43:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.OzJuDw9MWV /tmp/tmp.tQ2hJVtebd 00:47:56.164 14:43:59 keyring_file -- keyring/file.sh@20 -- # killprocess 2135655 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2135655 ']' 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2135655 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2135655 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2135655' 00:47:56.164 killing process with pid 2135655 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@969 -- # kill 2135655 00:47:56.164 Received shutdown signal, test time was about 1.000000 seconds 00:47:56.164 00:47:56.164 Latency(us) 00:47:56.164 [2024-10-13T12:43:59.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:56.164 [2024-10-13T12:43:59.871Z] =================================================================================================================== 00:47:56.164 [2024-10-13T12:43:59.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:56.164 14:43:59 keyring_file -- common/autotest_common.sh@974 -- # wait 2135655 00:47:56.424 14:43:59 keyring_file -- keyring/file.sh@21 -- # killprocess 2133742 00:47:56.424 14:43:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2133742 ']' 00:47:56.424 14:43:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2133742 00:47:56.424 14:43:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:47:56.424 14:43:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:56.424 14:43:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2133742 00:47:56.424 14:44:00 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:56.424 14:44:00 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:56.424 14:44:00 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2133742' 00:47:56.424 killing process with pid 2133742 00:47:56.424 14:44:00 keyring_file -- common/autotest_common.sh@969 -- # kill 2133742 00:47:56.424 14:44:00 keyring_file -- common/autotest_common.sh@974 -- # wait 2133742 00:47:56.684 00:47:56.684 real 0m11.842s 00:47:56.684 user 0m28.378s 00:47:56.684 sys 0m2.644s 00:47:56.684 14:44:00 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:56.684 14:44:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:56.684 ************************************ 00:47:56.684 END TEST keyring_file 00:47:56.684 ************************************ 00:47:56.684 14:44:00 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:47:56.684 14:44:00 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:56.684 14:44:00 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:47:56.684 14:44:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:56.684 14:44:00 -- common/autotest_common.sh@10 -- # set +x 00:47:56.684 ************************************ 00:47:56.684 START TEST keyring_linux 00:47:56.684 ************************************ 00:47:56.684 14:44:00 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:56.684 Joined session keyring: 955109792 00:47:56.684 * Looking for test storage... 00:47:56.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:56.684 14:44:00 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:47:56.945 14:44:00 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:47:56.945 14:44:00 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:47:56.945 14:44:00 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:56.945 14:44:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:56.945 14:44:00 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:56.945 14:44:00 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:47:56.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.945 --rc genhtml_branch_coverage=1 00:47:56.945 --rc genhtml_function_coverage=1 00:47:56.945 --rc genhtml_legend=1 00:47:56.945 --rc geninfo_all_blocks=1 00:47:56.945 --rc geninfo_unexecuted_blocks=1 00:47:56.945 00:47:56.945 ' 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:47:56.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.946 --rc genhtml_branch_coverage=1 00:47:56.946 --rc genhtml_function_coverage=1 00:47:56.946 --rc genhtml_legend=1 00:47:56.946 --rc geninfo_all_blocks=1 00:47:56.946 --rc geninfo_unexecuted_blocks=1 00:47:56.946 00:47:56.946 ' 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:47:56.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.946 --rc genhtml_branch_coverage=1 00:47:56.946 --rc genhtml_function_coverage=1 00:47:56.946 --rc genhtml_legend=1 00:47:56.946 --rc geninfo_all_blocks=1 00:47:56.946 --rc geninfo_unexecuted_blocks=1 00:47:56.946 00:47:56.946 ' 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:47:56.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:56.946 --rc genhtml_branch_coverage=1 00:47:56.946 --rc genhtml_function_coverage=1 00:47:56.946 --rc genhtml_legend=1 00:47:56.946 --rc geninfo_all_blocks=1 00:47:56.946 --rc geninfo_unexecuted_blocks=1 00:47:56.946 00:47:56.946 ' 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:56.946 14:44:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:56.946 14:44:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:56.946 14:44:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:56.946 14:44:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:56.946 14:44:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.946 14:44:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.946 14:44:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:56.946 14:44:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:56.946 14:44:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
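A note on the lt 1.15 2 / cmp_versions xtrace a few records back: it decides which lcov --rc flags the run exports by splitting both version strings on '.', '-' and ':' and comparing the fields numerically, left to right. A minimal sketch of that logic, reconstructed from the trace (assumption: numeric fields only; the real scripts/common.sh helper routes each field through its decimal function first):

lt() {  # "less than": exit 0 iff version $1 < version $2
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly greater
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly less
    done
    return 1  # equal versions are not less-than
}

lt 1.15 2 && echo "pre-2.x lcov: keep the legacy --rc lcov_* option names"

That comparison is why the run above settles on lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'.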
00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:56.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@731 -- # python - 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:56.946 /tmp/:spdk-test:key0 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:56.946 
14:44:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:47:56.946 14:44:00 keyring_linux -- nvmf/common.sh@731 -- # python - 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:56.946 14:44:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:56.946 /tmp/:spdk-test:key1 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2136094 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2136094 00:47:56.946 14:44:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2136094 ']' 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:56.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:56.946 14:44:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:57.207 [2024-10-13 14:44:00.663056] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:47:57.207 [2024-10-13 14:44:00.663115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136094 ] 00:47:57.207 [2024-10-13 14:44:00.793360] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
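The prep_key calls just above feed an inline python snippet (the 'nvmf/common.sh@731 -- # python -' records) to turn each raw hex key into the NVMeTLSkey-1 interchange string that gets written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A plausible reconstruction of that computation, offered as a sketch only: the exact layout (key bytes followed by a little-endian CRC32 trailer, base64-encoded) is an assumption based on the NVMe/TCP PSK interchange format, and this format_interchange_psk is a stand-in for the real helper in nvmf/common.sh:

format_interchange_psk() {
    local key=$1 digest=${2:-0}
    python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = struct.pack('<I', zlib.crc32(key))  # assumed little-endian CRC32 trailer
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
# Per the trace above, key0 should come out as
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: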
00:47:57.207 [2024-10-13 14:44:00.841259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:57.207 [2024-10-13 14:44:00.857687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.777 14:44:01 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:57.777 14:44:01 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:47:57.777 14:44:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:57.777 14:44:01 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:57.777 14:44:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:57.777 [2024-10-13 14:44:01.448879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:57.777 null0 00:47:57.777 [2024-10-13 14:44:01.480864] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:57.777 [2024-10-13 14:44:01.481212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:58.037 14:44:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:58.037 455223362 00:47:58.037 14:44:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:58.037 65503835 00:47:58.037 14:44:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2136321 00:47:58.037 14:44:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2136321 /var/tmp/bperf.sock 00:47:58.037 14:44:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2136321 ']' 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:58.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:58.037 14:44:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:58.037 [2024-10-13 14:44:01.556825] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.11.0-rc0 initialization... 00:47:58.037 [2024-10-13 14:44:01.556874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136321 ] 00:47:58.037 [2024-10-13 14:44:01.686870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
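The numbers echoed after the two keyctl add calls above (455223362 and 65503835) are kernel key serials; the suite runs under scripts/keyctl-session-wrapper, which is why the log opened with 'Joined session keyring: 955109792' and why @s refers to a private session keyring. The round trip the test exercises, condensed from commands that all appear verbatim in this trace (serials will differ on any other run):

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # store the PSK, capture its serial
keyctl search @s user :spdk-test:key0            # resolve the name back to a serial
keyctl print "$sn"                               # read the stored payload
keyctl unlink "$sn"                              # drop it from the session keyring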
00:47:58.037 [2024-10-13 14:44:01.732658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:58.298 [2024-10-13 14:44:01.749147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:58.867 14:44:02 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:58.868 14:44:02 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:47:58.868 14:44:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:58.868 14:44:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:58.868 14:44:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:58.868 14:44:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:59.127 14:44:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:59.127 14:44:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:59.386 [2024-10-13 14:44:02.897198] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:59.386 nvme0n1 00:47:59.386 14:44:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:59.386 14:44:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:59.386 14:44:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:59.386 14:44:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:59.386 14:44:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:59.386 14:44:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:59.646 14:44:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:59.646 14:44:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:59.646 14:44:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@25 -- # sn=455223362 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 455223362 == \4\5\5\2\2\3\3\6\2 ]] 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 455223362 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:59.646 14:44:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:59.905 Running I/O for 1 seconds... 00:48:00.844 24293.00 IOPS, 94.89 MiB/s 00:48:00.844 Latency(us) 00:48:00.844 [2024-10-13T12:44:04.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:00.844 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:48:00.844 nvme0n1 : 1.01 24295.08 94.90 0.00 0.00 5253.28 4351.91 14670.60 00:48:00.844 [2024-10-13T12:44:04.551Z] =================================================================================================================== 00:48:00.844 [2024-10-13T12:44:04.551Z] Total : 24295.08 94.90 0.00 0.00 5253.28 4351.91 14670.60 00:48:00.844 { 00:48:00.844 "results": [ 00:48:00.844 { 00:48:00.844 "job": "nvme0n1", 00:48:00.844 "core_mask": "0x2", 00:48:00.844 "workload": "randread", 00:48:00.844 "status": "finished", 00:48:00.844 "queue_depth": 128, 00:48:00.844 "io_size": 4096, 00:48:00.844 "runtime": 1.005183, 00:48:00.844 "iops": 24295.07860757693, 00:48:00.844 "mibps": 94.90265081084738, 00:48:00.844 "io_failed": 0, 00:48:00.844 "io_timeout": 0, 00:48:00.844 "avg_latency_us": 5253.275341438282, 00:48:00.844 "min_latency_us": 4351.914467089877, 00:48:00.844 "max_latency_us": 14670.604744403608 00:48:00.844 } 00:48:00.844 ], 00:48:00.844 "core_count": 1 00:48:00.844 } 00:48:00.845 14:44:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:00.845 14:44:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:01.105 14:44:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:48:01.105 14:44:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:48:01.105 14:44:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:48:01.105 14:44:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:48:01.105 14:44:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:48:01.105 14:44:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:01.367 14:44:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:48:01.367 14:44:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:48:01.367 14:44:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:48:01.367 14:44:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@642 -- # type 
-t bperf_cmd 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:48:01.367 14:44:04 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:01.367 14:44:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:48:01.367 [2024-10-13 14:44:05.011757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:48:01.367 [2024-10-13 14:44:05.011956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7be60 (107): Transport endpoint is not connected 00:48:01.367 [2024-10-13 14:44:05.012950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7be60 (9): Bad file descriptor 00:48:01.367 [2024-10-13 14:44:05.013949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:48:01.367 [2024-10-13 14:44:05.013958] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:48:01.367 [2024-10-13 14:44:05.013964] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:48:01.367 [2024-10-13 14:44:05.013970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:48:01.367 request: 00:48:01.367 { 00:48:01.367 "name": "nvme0", 00:48:01.367 "trtype": "tcp", 00:48:01.367 "traddr": "127.0.0.1", 00:48:01.367 "adrfam": "ipv4", 00:48:01.367 "trsvcid": "4420", 00:48:01.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:01.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:01.367 "prchk_reftag": false, 00:48:01.367 "prchk_guard": false, 00:48:01.367 "hdgst": false, 00:48:01.367 "ddgst": false, 00:48:01.367 "psk": ":spdk-test:key1", 00:48:01.367 "allow_unrecognized_csi": false, 00:48:01.367 "method": "bdev_nvme_attach_controller", 00:48:01.367 "req_id": 1 00:48:01.367 } 00:48:01.367 Got JSON-RPC error response 00:48:01.367 response: 00:48:01.367 { 00:48:01.367 "code": -5, 00:48:01.367 "message": "Input/output error" 00:48:01.367 } 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@33 -- # sn=455223362 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 455223362 00:48:01.367 1 links removed 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@33 -- # sn=65503835 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 65503835 00:48:01.367 1 links removed 00:48:01.367 14:44:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2136321 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2136321 ']' 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2136321 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:01.367 14:44:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2136321 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2136321' 00:48:01.627 killing process with pid 2136321 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 2136321 00:48:01.627 Received shutdown signal, test time was about 1.000000 seconds 00:48:01.627 00:48:01.627 
Latency(us) 00:48:01.627 [2024-10-13T12:44:05.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:01.627 [2024-10-13T12:44:05.334Z] =================================================================================================================== 00:48:01.627 [2024-10-13T12:44:05.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 2136321 00:48:01.627 14:44:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2136094 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2136094 ']' 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2136094 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2136094 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2136094' 00:48:01.627 killing process with pid 2136094 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 2136094 00:48:01.627 14:44:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 2136094 00:48:01.887 00:48:01.887 real 0m5.170s 00:48:01.887 user 0m9.450s 00:48:01.887 sys 0m1.423s 00:48:01.887 14:44:05 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:01.887 14:44:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:48:01.887 ************************************ 00:48:01.887 END TEST keyring_linux 00:48:01.887 ************************************ 00:48:01.887 14:44:05 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:48:01.887 14:44:05 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:48:01.887 14:44:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:48:01.887 14:44:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:48:01.887 14:44:05 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:48:01.887 14:44:05 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:48:01.887 14:44:05 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:48:01.887 14:44:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:48:01.887 14:44:05 -- common/autotest_common.sh@10 -- # set +x 00:48:01.887 14:44:05 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:48:01.887 14:44:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:48:01.887 14:44:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:48:01.887 14:44:05 -- common/autotest_common.sh@10 -- # set +x 00:48:10.014 INFO: APP EXITING 
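Before the suite wound down, the second bdev_nvme_attach_controller above (with :spdk-test:key1 rather than the key the listener was brought up with) was expected to fail, and the NOT wrapper visible in the trace inverts that failure into a test pass. A minimal sketch of the pattern, reconstructed from the xtrace (assumption: simplified; the real autotest_common.sh helper also screens crash-range statuses via the (( es > 128 )) check seen above):

NOT() {
    local es=0
    "$@" || es=$?   # run the wrapped command, capture its exit status
    (( !es == 0 ))  # succeed only if the command failed
}

NOT false && echo 'failure observed, as the test requires'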
00:48:10.014 INFO: killing all VMs 00:48:10.014 INFO: killing vhost app 00:48:10.014 INFO: EXIT DONE 00:48:13.311 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:65:00.0 (144d a80a): Already using the nvme driver 00:48:13.311 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:48:13.311 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:48:17.511 Cleaning 00:48:17.511 Removing: /var/run/dpdk/spdk0/config 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:48:17.511 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:17.511 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:17.511 Removing: /var/run/dpdk/spdk1/config 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:48:17.511 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:48:17.511 Removing: /var/run/dpdk/spdk1/hugepage_info 00:48:17.511 Removing: /var/run/dpdk/spdk2/config 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:48:17.511 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:48:17.511 Removing: /var/run/dpdk/spdk2/hugepage_info 00:48:17.511 Removing: /var/run/dpdk/spdk3/config 00:48:17.512 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:48:17.512 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:48:17.512 Removing: /var/run/dpdk/spdk3/hugepage_info 00:48:17.512 Removing: /var/run/dpdk/spdk4/config 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:48:17.512 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:48:17.512 Removing: /var/run/dpdk/spdk4/hugepage_info 00:48:17.512 Removing: /dev/shm/bdev_svc_trace.1 00:48:17.512 Removing: /dev/shm/nvmf_trace.0 00:48:17.512 Removing: /dev/shm/spdk_tgt_trace.pid1455598 00:48:17.512 Removing: /var/run/dpdk/spdk0 00:48:17.512 Removing: /var/run/dpdk/spdk1 00:48:17.512 Removing: /var/run/dpdk/spdk2 00:48:17.512 Removing: /var/run/dpdk/spdk3 00:48:17.512 Removing: /var/run/dpdk/spdk4 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1453966 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1455598 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1456311 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1457344 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1457696 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1458762 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1459097 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1459332 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1460379 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1461157 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1461548 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1461948 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1462361 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1462698 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1462874 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1463155 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1463544 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1464717 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1468237 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1468595 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1468961 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1469288 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1469671 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1469878 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1470381 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1470405 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1470762 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1471069 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1471135 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1471471 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1471915 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1472272 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1472674 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1477267 00:48:17.512 Removing: 
/var/run/dpdk/spdk_pid1482815 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1495419 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1496213 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1501568 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1501921 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1507359 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1514498 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1517623 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1530288 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1542007 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1544022 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1545065 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1566442 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1571400 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1671554 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1678148 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1685831 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1693115 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1693159 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1694222 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1695268 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1696335 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1696977 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1697132 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1697382 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1697479 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1697481 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1698481 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1699492 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1700514 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1701166 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1701293 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1701574 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1702946 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1704342 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1714212 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1748766 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1754351 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1756296 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1758371 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1758720 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1759058 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1759321 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1760119 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1762310 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1764103 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1764634 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1767199 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1767963 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1768908 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1773794 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1780527 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1780529 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1780531 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1785312 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1790190 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1795998 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1840417 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1845138 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1852687 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1854508 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1856229 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1858014 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1863842 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1868674 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1878133 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1878176 00:48:17.512 Removing: 
/var/run/dpdk/spdk_pid1883297 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1883623 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1883960 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1884304 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1884445 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1885808 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1887711 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1889666 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1891662 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1893659 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1895565 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1903528 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1904170 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1905359 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1906562 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1913090 00:48:17.512 Removing: /var/run/dpdk/spdk_pid1916132 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1922753 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1929527 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1939667 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1948309 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1948389 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1971944 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1972731 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1973499 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1974264 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1975146 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1975892 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1976666 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1977391 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1982512 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1982848 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1989948 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1990336 00:48:17.773 Removing: /var/run/dpdk/spdk_pid1996853 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2002045 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2014060 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2014824 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2019937 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2020360 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2025439 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2032301 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2035120 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2047375 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2058719 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2060652 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2061733 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2081256 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2086036 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2089257 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2097017 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2097028 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2103056 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2105426 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2108266 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2109532 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2111991 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2113393 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2123568 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2124074 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2124668 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2127563 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2128227 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2128807 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2133742 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2133834 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2135655 00:48:17.773 Removing: 
/var/run/dpdk/spdk_pid2136094 00:48:17.773 Removing: /var/run/dpdk/spdk_pid2136321 00:48:17.773 Clean 00:48:18.035 14:44:21 -- common/autotest_common.sh@1451 -- # return 0 00:48:18.035 14:44:21 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:48:18.035 14:44:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:48:18.035 14:44:21 -- common/autotest_common.sh@10 -- # set +x 00:48:18.035 14:44:21 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:48:18.035 14:44:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:48:18.035 14:44:21 -- common/autotest_common.sh@10 -- # set +x 00:48:18.035 14:44:21 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:48:18.035 14:44:21 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:48:18.035 14:44:21 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:48:18.035 14:44:21 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:48:18.035 14:44:21 -- spdk/autotest.sh@394 -- # hostname 00:48:18.035 14:44:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:48:18.297 geninfo: WARNING: invalid characters removed from testname! 00:48:44.874 14:44:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:46.274 14:44:49 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:48.815 14:44:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:50.196 14:44:53 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:52.105 14:44:55 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:53.487 14:44:56 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:55.397 14:44:58 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:48:55.397 14:44:58 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:48:55.397 14:44:58 -- common/autotest_common.sh@1691 -- $ lcov --version 00:48:55.397 14:44:58 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:48:55.397 14:44:58 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:48:55.397 14:44:58 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:48:55.397 14:44:58 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:48:55.397 14:44:58 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:48:55.397 14:44:58 -- scripts/common.sh@336 -- $ IFS=.-: 00:48:55.397 14:44:58 -- scripts/common.sh@336 -- $ read -ra ver1 00:48:55.397 14:44:58 -- scripts/common.sh@337 -- $ IFS=.-: 00:48:55.397 14:44:58 -- scripts/common.sh@337 -- $ read -ra ver2 00:48:55.397 14:44:58 -- scripts/common.sh@338 -- $ local 'op=<' 00:48:55.397 14:44:58 -- scripts/common.sh@340 -- $ ver1_l=2 00:48:55.397 14:44:58 -- scripts/common.sh@341 -- $ ver2_l=1 00:48:55.397 14:44:58 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:48:55.397 14:44:58 -- scripts/common.sh@344 -- $ case "$op" in 00:48:55.397 14:44:58 -- scripts/common.sh@345 -- $ : 1 00:48:55.397 14:44:58 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:48:55.397 14:44:58 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:55.397 14:44:58 -- scripts/common.sh@365 -- $ decimal 1 00:48:55.397 14:44:58 -- scripts/common.sh@353 -- $ local d=1 00:48:55.397 14:44:58 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:48:55.397 14:44:58 -- scripts/common.sh@355 -- $ echo 1 00:48:55.397 14:44:58 -- scripts/common.sh@365 -- $ ver1[v]=1 00:48:55.397 14:44:58 -- scripts/common.sh@366 -- $ decimal 2 00:48:55.397 14:44:58 -- scripts/common.sh@353 -- $ local d=2 00:48:55.397 14:44:58 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:48:55.397 14:44:58 -- scripts/common.sh@355 -- $ echo 2 00:48:55.397 14:44:58 -- scripts/common.sh@366 -- $ ver2[v]=2 00:48:55.397 14:44:58 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:48:55.397 14:44:58 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:48:55.397 14:44:58 -- scripts/common.sh@368 -- $ return 0 00:48:55.397 14:44:58 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:55.397 14:44:58 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:48:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:55.397 --rc genhtml_branch_coverage=1 00:48:55.397 --rc genhtml_function_coverage=1 00:48:55.397 --rc genhtml_legend=1 00:48:55.397 --rc geninfo_all_blocks=1 00:48:55.397 --rc geninfo_unexecuted_blocks=1 00:48:55.397 00:48:55.397 ' 00:48:55.397 14:44:58 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:48:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:55.397 --rc genhtml_branch_coverage=1 00:48:55.397 --rc genhtml_function_coverage=1 00:48:55.397 --rc genhtml_legend=1 00:48:55.397 --rc geninfo_all_blocks=1 00:48:55.397 --rc geninfo_unexecuted_blocks=1 00:48:55.397 00:48:55.397 ' 00:48:55.397 14:44:58 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:48:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:55.397 --rc genhtml_branch_coverage=1 00:48:55.397 --rc genhtml_function_coverage=1 00:48:55.397 --rc genhtml_legend=1 00:48:55.397 --rc geninfo_all_blocks=1 00:48:55.397 --rc geninfo_unexecuted_blocks=1 00:48:55.397 00:48:55.397 ' 00:48:55.397 14:44:58 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:48:55.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:55.397 --rc genhtml_branch_coverage=1 00:48:55.397 --rc genhtml_function_coverage=1 00:48:55.397 --rc genhtml_legend=1 00:48:55.397 --rc geninfo_all_blocks=1 00:48:55.397 --rc geninfo_unexecuted_blocks=1 00:48:55.397 00:48:55.397 ' 00:48:55.397 14:44:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:55.397 14:44:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:48:55.397 14:44:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:48:55.397 14:44:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:55.397 14:44:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:55.397 14:44:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.397 14:44:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.397 14:44:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.397 14:44:58 -- paths/export.sh@5 -- $ export PATH 00:48:55.397 14:44:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:55.397 14:44:58 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:48:55.397 14:44:58 -- common/autobuild_common.sh@486 -- $ date +%s 00:48:55.397 14:44:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728823498.XXXXXX 00:48:55.397 14:44:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728823498.Pdxs6x 00:48:55.397 14:44:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:48:55.397 14:44:58 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:48:55.397 14:44:58 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:48:55.397 14:44:58 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:48:55.397 14:44:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:48:55.397 14:44:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:48:55.397 14:44:58 -- common/autobuild_common.sh@502 -- $ get_config_params 00:48:55.397 14:44:58 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:48:55.397 14:44:58 -- common/autotest_common.sh@10 -- $ set +x 00:48:55.397 14:44:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:48:55.397 14:44:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:48:55.397 14:44:58 -- pm/common@17 -- $ local monitor 00:48:55.397 14:44:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.397 14:44:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.397 14:44:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:48:55.397 
00:48:55.397 14:44:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:48:55.397 14:44:58 -- pm/common@25 -- $ sleep 1
00:48:55.397 14:44:58 -- pm/common@21 -- $ date +%s
00:48:55.398 14:44:58 -- pm/common@21 -- $ date +%s
00:48:55.398 14:44:58 -- pm/common@21 -- $ date +%s
00:48:55.398 14:44:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728823498
00:48:55.398 14:44:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728823498
00:48:55.398 14:44:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728823498
00:48:55.398 14:44:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728823498
00:48:55.398 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728823498_collect-cpu-load.pm.log
00:48:55.398 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728823498_collect-vmstat.pm.log
00:48:55.398 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728823498_collect-cpu-temp.pm.log
00:48:55.398 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728823498_collect-bmc-pm.bmc.pm.log
00:48:56.340 14:44:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:48:56.340 14:44:59 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:48:56.340 14:44:59 -- spdk/autopackage.sh@14 -- $ timing_finish
00:48:56.340 14:44:59 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:48:56.340 14:44:59 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:48:56.340 14:44:59 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:48:56.340 14:44:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:48:56.340 14:44:59 -- pm/common@29 -- $ signal_monitor_resources TERM
00:48:56.340 14:44:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:48:56.340 14:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:48:56.340 14:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:48:56.340 14:44:59 -- pm/common@44 -- $ pid=2150430
00:48:56.340 14:44:59 -- pm/common@50 -- $ kill -TERM 2150430
00:48:56.340 14:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:48:56.340 14:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:48:56.340 14:44:59 -- pm/common@44 -- $ pid=2150431
00:48:56.340 14:44:59 -- pm/common@50 -- $ kill -TERM 2150431
00:48:56.340 14:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:48:56.340 14:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
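The pm/common trace above follows a common pidfile pattern: start_monitor_resources launches each collector in the background and records its PID, and the stop_monitor_resources EXIT trap reads each pidfile back and sends TERM (via sudo -E for the BMC collector, which runs privileged). A simplified sketch of that pattern; the names and paths here are illustrative, not the exact pm/common code:

    # Simplified pidfile start/stop pattern (illustrative, not pm/common itself).
    POWER_DIR=${POWER_DIR:-/tmp/power}
    mkdir -p "$POWER_DIR"

    start_monitor() {
        local name=$1; shift
        "$@" > "$POWER_DIR/$name.pm.log" 2>&1 &   # run collector in background
        echo $! > "$POWER_DIR/$name.pid"          # remember its PID for teardown
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(<"$pidfile")" 2>/dev/null
            rm -f "$pidfile"
        done
    }

    trap stop_monitors EXIT
    start_monitor collect-vmstat vmstat 1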
00:48:56.340 14:44:59 -- pm/common@44 -- $ pid=2150433
00:48:56.340 14:44:59 -- pm/common@50 -- $ kill -TERM 2150433
00:48:56.340 14:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:48:56.340 14:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:48:56.340 14:44:59 -- pm/common@44 -- $ pid=2150461
00:48:56.340 14:44:59 -- pm/common@50 -- $ sudo -E kill -TERM 2150461
00:48:56.340 + [[ -n 1351422 ]]
00:48:56.340 + sudo kill 1351422
00:48:56.351 [Pipeline] }
00:48:56.368 [Pipeline] // stage
00:48:56.375 [Pipeline] }
00:48:56.389 [Pipeline] // timeout
00:48:56.393 [Pipeline] }
00:48:56.407 [Pipeline] // catchError
00:48:56.411 [Pipeline] }
00:48:56.426 [Pipeline] // wrap
00:48:56.431 [Pipeline] }
00:48:56.444 [Pipeline] // catchError
00:48:56.452 [Pipeline] stage
00:48:56.454 [Pipeline] { (Epilogue)
00:48:56.466 [Pipeline] catchError
00:48:56.467 [Pipeline] {
00:48:56.479 [Pipeline] echo
00:48:56.480 Cleanup processes
00:48:56.485 [Pipeline] sh
00:48:56.775 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:56.775 2150574 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:48:56.775 2151145 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:56.790 [Pipeline] sh
00:48:57.080 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:48:57.080 ++ grep -v 'sudo pgrep'
00:48:57.080 ++ awk '{print $1}'
00:48:57.080 + sudo kill -9 2150574
00:48:57.094 [Pipeline] sh
00:48:57.382 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:49:09.740 [Pipeline] sh
00:49:10.031 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:49:10.031 Artifacts sizes are good
00:49:10.046 [Pipeline] archiveArtifacts
00:49:10.054 Archiving artifacts
00:49:10.241 [Pipeline] sh
00:49:10.531 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:49:10.547 [Pipeline] cleanWs
00:49:10.558 [WS-CLEANUP] Deleting project workspace...
00:49:10.558 [WS-CLEANUP] Deferred wipeout is used...
00:49:10.565 [WS-CLEANUP] done
00:49:10.567 [Pipeline] }
00:49:10.584 [Pipeline] // catchError
00:49:10.595 [Pipeline] sh
00:49:10.884 + logger -p user.info -t JENKINS-CI
00:49:10.894 [Pipeline] }
00:49:10.908 [Pipeline] // stage
00:49:10.913 [Pipeline] }
00:49:10.927 [Pipeline] // node
00:49:10.932 [Pipeline] End of Pipeline
00:49:10.980 Finished: SUCCESS
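For reference, the "Cleanup processes" step traced in the epilogue above is the usual pgrep sweep: list anything still running out of the job workspace, filter out the pgrep invocation itself, and kill -9 the survivors (here the leftover ipmitool sdr dump). Restated as a standalone snippet; the workspace path is this job's, so adjust it before reuse:

    # Illustrative restatement of the traced cleanup step.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids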